Evaluating by Comparisons: Watch Out for the Traps!

By Susan J. Ellis

A few months ago, I received the following question in my e-mail:

We have just completed our year-end report for 2000 and I am looking for information that would help us better evaluate our volunteer program. Can you help me find information on number/percentage "norms" for: 1) number of people who contact us about volunteering, compared to number who actually attend an introductory meeting; 2) number of those who attend an introductory meeting and actually become volunteers; 3) yearly rate of increase in volunteer numbers; and 4) yearly attrition rate for a volunteer program?

This was not the first time I'd been asked about an external "standard" against which a specific agency can measure the effectiveness of its volunteer effort. Here is how I answered:

Your question - while certainly seeking useful information - unfortunately presumes a far better database of statistics about volunteering than exists! I do not know of any national database that would help you. On the other hand, I think you are looking for 'generic' comparative data, which may not really be what you need.

The variety of things that volunteers do is so enormous that it would be hard to create measurements relevant to all. For example, in the world of paid work, would you think of comparing the attrition/retention rates of nurses to those of, say, ditch diggers or astronauts? Yes, there are Labor Department data, but no one tries to find a 'standard' answer that fits every category of employee.

Also, whether or not the program 'increased' may not measure effectiveness! I know of many programs that would be more successful if they cut their volunteer corps in half and retained only those volunteers qualified to help! Further, 'retention' is measured not by sheer longevity but against how long each volunteer told you s/he planned to stay.

So here are my suggestions:

1. Stop looking for outside validation of your program. Instead, articulate reasonable, measurable goals and objectives for the work you need volunteers to do. For example, rather than an arbitrary 'we want to grow 15% next year,' how about: 'Next year we'd like to have enough qualified volunteers to respond to 90% of requests made of the department.' See the difference?

2. If you do feel you want to compare yourself to others, then focus on organizations similar to yours, perhaps identified through a professional society for your field or setting. Contact them and see if they will share their statistics as a comparison to yours.

3. Perhaps you would be better off comparing internal data from one year to the next. So, if 60% of volunteers remained committed after training in 1998, what's the percentage in 2000 and why?

As I sent this response, I knew that I wanted to revisit the subject in a Hot Topic at some point. It sounds so reasonable to ask, "What happens elsewhere?" Sometimes this makes sense, but not when the question being studied concerns services integral to one specific setting. Some more things to consider:

  • Did you begin the reporting period by articulating what you wanted volunteers to accomplish? Too many evaluations start at the end, which is too late. Data can only be analyzed against the goals you set at the start. You may discover you doubled the volunteer workforce this year, but did you need to triple it?
     
  • As important as collecting data is interpreting it correctly. For example, if you discover that 50% of volunteers drop off after 6 months, you need to look for clues as to why. Asking what the "norm" is for volunteer retention in other settings is irrelevant. In fact, it focuses the attention (blame) on volunteers - trying to find a cause inherent in them - rather than considering what is going on in your setting.
     
  • After you have assessed your own situation, it might indeed be useful to know about trends or issues faced by others - at least those working in a similar context. Some examples: Are others finding that high school students most often cite transportation problems as a reason they can't volunteer? How are others dealing with volunteers who are over age 85 and perhaps losing some abilities? These types of questions allow you to distinguish what is and isn't the result of your managerial actions. But no external data will mean anything in deciding whether the attrition rate of volunteers in YOUR organization is acceptable or not.

The only other thing I want to say about comparisons is the old problem of colleagues asking: "What do other volunteer program managers earn as a salary?" This is a great object lesson in the fallacy of general comparisons. If we look at the volunteer management field as a whole, salaries range from perhaps $12,000 to $80,000. So what? It might (but only might) be valid to see what volunteer program managers in similar settings earn, so that at least you aren't equating the staff budget of a huge health care system with that of a one-room rape crisis center. But isn't the REAL question: "How does my salary compare to that of other people on staff in MY setting who work at the department head level?"

If we can see the problem with salary comparisons, it ought to be clear why a similar, internal approach is needed in assessing the accomplishments of volunteers.

What has been your experience? Are there external comparisons that you found useful?
