"Where the heck do I start?
How do I convince my team to do this?
What technology do I use?"
"Will I be able to use it?
Will it require a lot of training?
It seems like an expensive affair, accessible only to big orgs.
IMM seems so hard; will I be able to do it?"
These are questions we often hear while talking to customers and prospects, and, trust us, you are not alone if you have them. In this newsletter, we are going to take a no-nonsense, downright practical approach to measuring impact.
You will see that Impact Measurement, which is essentially the same as Program Evaluation for larger organizations running multiple programs, doesn't have to be chaotic.
Let's get started! Imagine a hypothetical scenario:
Your program trains young girls (ages 16 to 18) in coding so they can land jobs in the tech industry and improve their quality of life.
As part of the program, you are doing two things.
- Training them to code computer applications
- Running a resume-building workshop to help them develop impactful resumes and interviewing skills
Now, how would you go about measuring the impact of the program?
There could be many primary and secondary impacts, but for this example, we will keep things simple.
It always pays to start small and simple rather than trying to think of everything under the sun.
We may think we need to track the number of girls who get jobs and report an improved quality of life.
Two problems with this approach:
- It doesn't help us improve our programs in any way.
- Nor does it conclusively tell us that getting a job was a direct result of our program.
To effectively measure the impact of our activities, we need to think about what effect each activity has on moving the needle towards "helping the girls get a job in the tech industry to increase the chances of improving their quality of life".
For example:
Even before they get a job, we need to see what impact the resume-building workshop had on their interview opportunities.
- Did they get more interview opportunities?
- Did they even get one interview as a result?
- What about how they feel about the program? Did it improve their confidence in facing interviews?
There are many more questions we could ask, but this is where we want to stress the importance of keeping it simple and focusing on what will bring valuable information to make our programs better. So, let's keep the focus on the three questions above.
Now, to answer them, we need two data points:
- Did the participating girl attend our resume-building workshop? (Y/N)
- Can you give us feedback on the resume-building workshop and the impact it had on your job search and interview calls? (Individual feedback)
The second question is open-ended because we need to collect nuanced feedback, which increases an organization's chances of gathering all kinds of responses: positive, negative, and everything in between.
The following screenshot shows the analyzed responses. We can clearly see how girls who attended the resume-building workshop improved their interviewing prospects.
The interesting thing here is that data from both a quantitative question and a qualitative question was used to understand what influence the resume-building workshop had on the girls' interviewing prospects.
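To make this concrete, here is a minimal Python sketch of the underlying idea, using made-up responses and a deliberately crude keyword coding. It only illustrates pairing the attendance (Y/N) data point with coded qualitative feedback; it is not how our app actually analyzes responses.

```python
# A minimal, hypothetical sketch (not our actual pipeline) of how a Y/N
# attendance field and open-ended feedback could be looked at together.
# The response records and keyword lists are made up for illustration.

from collections import Counter

# One record per participant: workshop attendance (Y/N) plus open-ended feedback.
responses = [
    {"attended": "Y", "feedback": "The workshop helped; I got an interview call within two weeks."},
    {"attended": "Y", "feedback": "I feel much more confident facing interviews now."},
    {"attended": "N", "feedback": "Still applying, no responses yet."},
    {"attended": "Y", "feedback": "My resume looks better and I got an interview call last month."},
    {"attended": "N", "feedback": "Haven't heard back from any company."},
]

# Crude keyword "coding" of the qualitative answers into the two themes
# behind our three questions: interview opportunities and confidence.
THEMES = {
    "interview_opportunity": ["interview call", "got an interview"],
    "improved_confidence": ["confident", "confidence"],
}

def code_feedback(text):
    """Return the set of themes whose keywords appear in the feedback."""
    lowered = text.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(keyword in lowered for keyword in keywords)}

# Cross-tabulate coded themes against workshop attendance.
tally = Counter()
for record in responses:
    for theme in code_feedback(record["feedback"]):
        tally[(record["attended"], theme)] += 1

for (attended, theme), count in sorted(tally.items()):
    print(f"attended={attended}  {theme}: {count}")
```

In practice, open-ended answers rarely fit neat keyword lists, which is exactly where the AI-driven analysis mentioned below helps.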
This process can be very hard when you don't have clarity on the goal, or when you don't have the right tools to fit your resources (time, skill, experience). On our app, this analysis happens seconds after the data is collected because it is driven by AI and our proprietary technique.
The even more interesting thing is that the insight came from just two questions!
The main point here is that measuring impact doesn't need to be a chaotic affair, nor an expensive one. It does require organizations to think simply, be more honest about what they want to learn about their programs, and start small rather than falling into analysis paralysis. Interested?