Evaluation of Cultural Programs
There’s a lot of buzz about evaluation these days. Are programs effective? Do they make the library, and by extension, the community, a better place? Do they accomplish what we intend and/or do they sometimes have other, maybe even better, unintended consequences?
There can also be a certain amount of performance anxiety about evaluation, especially if it’s required to justify or secure funding and/or we don’t trust the results. But resources are too scarce these days to do much of anything just because it seems like a good idea. Programming competes for public and private resources and has to prove its worth.
The first step in any evaluation process is figuring out what you want to know. Once that’s settled, it’s easier to move on to how you’re going to get the answers. While it’s true that some things can’t be measured, or may be difficult to measure, once you start to build a regular pattern of evaluation, you may be surprised to see how much you can find out.
This can be a complex subject, worth both serious study and perhaps the help of an outside consultant to build evaluation methods that work best for your library and your programs. But here are some guidelines to get you started:
- What will change as a result of your program? This can apply to changes for the audience, the library, library staff, your relationship with a partner, and so on.
- Set targets for those changes, so you have something to measure against. If you want to increase attendance, set a goal. If you want to get more publicity, set a goal and see how you do. If you want to reach a different audience (teens, men, new library card holders, etc.), set a goal for that.
- Decide how you’ll collect information. The first thing most of us think of is an evaluation form to be filled out at the program, but there are lots of other methods. Observation, follow-up emails, and interviews can all get at information that surveys don’t reveal. For example, observing whether or not an audience asks questions can tell you a lot about its level of engagement with the program.
- Be systematic. Use the same forms across programs so data can be compared.
- Keep evaluation forms short and to the point, and consider offering incentives for completing them. You’d be surprised how much a piece of chocolate will get you!
- Don’t just quantify, qualify. Collect anecdotal information, also called “impact stories,” to go with the data.
- Organize evaluation forms and surveys for easy tabulation. Don’t ask a lot of open-ended questions, especially of a large sample, unless you have the time and means to deal with the answers.