I recently had a client who wanted to figure out how to show the impact of their services. Specifically, they wanted to conduct a survey that would let them say "after receiving this service, x% of people saw a y% improvement." The question posed was how to achieve this. Here are a few options:
Option 1: The client wanted to ask a multiple choice question of all people who experienced the service.
Q: How has Y improved since you had this service?
A: 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%

Unfortunately, this question has several problems: it offers too many answer choices, and it asks respondents to analyze their own behavior and quantify it. Most would struggle to answer the question, and analyzing the responses would be even more difficult.
Option 2: Conduct a case study with one customer. Identify key indicators of a positive impact and measure them before and after the service. This option will probably give you accurate numbers and high believability, but not all customers may identify with this customer's specific situation, which would reduce the value of the evidence.
Option 3: Collect data on all customers as they sign up for the service to measure key indicators. After they receive the service, survey all customers again, measuring the same key indicators and asking for their overall evaluation of the service. This is the best option for the results they are seeking. An example of #3 is below.
All of these examples demonstrate that you have to think about customer evidence from the moment you engage a customer for a service or product. If you wait until the end to measure the impact of the service, you will have no point of comparison and will be left asking customers to estimate their own improvement.
Below is an example of measuring the impact of a program using pre-surveys and post-surveys. This comes from a study on the impact of parenting classes.
Participants (53 in this evaluation period) were asked on their intake and exit surveys to note how many times they read to their child during the course of a week. The chart to the right shows the results of the intake survey (47 individuals answered this question) and the exit survey (43 individuals answered). Comparing results for the 43 participants who answered this question on both the intake and exit surveys: 33% read to their children with the same frequency, 49% read to their children more, and 19% read to their children less after completing the program (percentages do not sum to 100 due to rounding).
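The same/more/less comparison above is simple to compute once you have paired intake and exit responses. Here is a minimal sketch in Python; the data values and the `categorize` function are illustrative assumptions, not taken from the study, and real analysis would first match respondents across the two surveys and drop anyone who answered only one.

```python
# Hypothetical paired responses: times per week each participant
# reported reading to their child on the intake and exit surveys.
# Only participants who answered on BOTH surveys are compared.
paired = [(2, 4), (3, 3), (5, 2), (1, 3), (2, 2)]  # (intake, exit)

def categorize(pairs):
    """Count participants whose exit frequency is the same as,
    higher than, or lower than their intake frequency, and
    return each group as a rounded percentage of the total."""
    same = sum(1 for pre, post in pairs if post == pre)
    more = sum(1 for pre, post in pairs if post > pre)
    less = sum(1 for pre, post in pairs if post < pre)
    n = len(pairs)
    return {
        "same": round(100 * same / n),
        "more": round(100 * more / n),
        "less": round(100 * less / n),
    }

print(categorize(paired))  # → {'same': 40, 'more': 40, 'less': 20}
```

Because each group is rounded independently, the three percentages can sum to slightly more or less than 100, which is why figures like 33% / 49% / 19% are common in reports of this kind.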
How do you measure impact?
- Stephanie Vanterpool, Senior Director