AASP: Metrics Best Practices
Jon Thorsen (one of my favorite peeps!) of George Washington University and Ellen Duero Rohwer of JCA, Inc. are co-facilitating a session on metrics best practices. Those of you who've talked to me about metrics know I have some strong feelings on the topic. I'm looking forward to this talk.
Jon says: "Everyone wants standards and performance measures... right up to the point where they are measured."
And if we are not measuring performance, we are cheating our donors. It's akin to asking folks to invest in a company that does not look at its bottom line.
Jon points out how important face-to-face communication is with major gift officers, who are face-to-face people.
If you are in a higher ed setting, use student workers to shore up data entry. In other settings, rely on your admin folks. Send them to talk to development officers in person to get info into the system. It's not how the data gets in that matters; it's that it gets in.
Focus on the business process, not the database. Use the database to support the business processes, instead of vice-versa.
Serve as a translator. If you are a data person, don't take your terminology for granted. Data folks suffer from the "curse of knowledge" and don't necessarily explain processes in language that frontline fundraisers will understand.
Use metrics visibility to stimulate data entry. Peer pressure can convince development officers to track their metrics. As I say, "If it's not in the database, it didn't happen."
But be careful: make sure your metrics don't encourage so much competition that they dampen collegiality. "Shared credit" or "assist credit" can ameliorate this by encouraging development officers to work together. It's vital that shared credit be an actual factor in performance reviews -- all metrics should be included in performance reviews, not just dollars raised. And what about credit for researchers and donor relations?
Watch out for "Juneables" -- what can we close by fiscal year end? One solution: look at close amount vs. capacity rating to ensure that development officers aren't asking too small.
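The close-amount-versus-capacity check above is easy to automate. Here's a minimal sketch; the field names and the 10% threshold are my own illustrative assumptions, not anything from the session.

```python
# Flag closed gifts that look small relative to the prospect's capacity
# rating -- a possible symptom of "Juneable" asks. Threshold and field
# names are hypothetical.

def flag_small_asks(gifts, threshold=0.10):
    """Return gifts whose close amount is under `threshold` of capacity."""
    return [
        g for g in gifts
        if g["capacity_rating"] > 0
        and g["close_amount"] / g["capacity_rating"] < threshold
    ]

gifts = [
    {"prospect": "A", "close_amount": 5_000, "capacity_rating": 250_000},
    {"prospect": "B", "close_amount": 50_000, "capacity_rating": 250_000},
]
print(flag_small_asks(gifts))  # only prospect A: 5,000 / 250,000 = 2%
```

A flagged gift isn't automatically a problem -- it's a conversation starter for the officer's manager.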
Definitions are important. What is a "visit"? At Jon's institution, visits are defined as "pre-planned", i.e. running into someone at the grocery store does not count. They also look at portfolio penetration or "unique visits" to ensure that development officers are not taking the same prospect to lunch each month with no new prospects in the mix.
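Portfolio penetration as described -- unique prospects visited out of all assigned -- can be sketched in a few lines. The data shapes here are my own assumptions for illustration.

```python
# Hypothetical sketch of "portfolio penetration": the share of an
# officer's assigned prospects visited at least once in the period.
# Repeat visits to the same prospect don't raise the number.

def portfolio_penetration(portfolio, visits):
    """portfolio: set of assigned prospect IDs;
    visits: list of prospect IDs, one entry per recorded visit."""
    unique_visited = set(visits) & portfolio
    return len(unique_visited) / len(portfolio) if portfolio else 0.0

portfolio = {"P1", "P2", "P3", "P4"}
visits = ["P1", "P1", "P1", "P2"]  # same prospect to lunch three times
print(portfolio_penetration(portfolio, visits))  # 0.5 -- only 2 of 4 visited
```

Counting unique prospects rather than raw visits is exactly what surfaces the "same prospect to lunch every month" pattern.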
Another unique idea: count characters in a contact report to determine whether it's valuable. A 30-character contact report ("Had lunch. It was great.") is simply not valuable.
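A character-count screen like the one described is a one-liner; the 200-character floor below is my own illustrative assumption, not a figure from the session.

```python
# Hypothetical minimum-length check for contact reports. The floor is
# an assumption; tune it to what a genuinely useful report looks like.

def too_short(report_text, minimum=200):
    """True if the contact report is under `minimum` characters."""
    return len(report_text.strip()) < minimum

print(too_short("Had lunch. It was great."))  # True -- only 24 characters
```

Length is a crude proxy for substance, but it's cheap to compute and catches the worst offenders.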
What about portfolio sizes? Factors that influence size are gift size (principal gift officers are expected to have smaller portfolios), management responsibilities, and tenure (newer officers have smaller portfolios).
Advancement Services should not be the enforcer of metrics. I always used to say, "We don't make the news, we just report the news." It's crucial to partner with senior leadership, so that the development officer's manager is responsible for accountability.