How to ask open-ended questions

Learn how to ask open-ended questions effectively with our guide: a powerful way to hear the stakeholder voice.
Category: Best Practices
Written by: Madhukar Prabhakara
Published on: March 26, 2024

Power of Open-Ended Questions

In our quest for meaningful conversations, the skill of asking open-ended questions stands as a powerful tool. But why should we bother to master this art? Open-ended questions, unlike closed-ended ones, invite depth, reflection, and a broader spectrum of responses. They pave the way for exploration, understanding, and connection. Embracing them opens doors to richer interactions and deeper insights. So, let's delve into the world of open-ended questions and discover their transformative potential.

Why bother asking open-ended questions?

Insights from the Field: The Power of Open-Ended Questions

This week's article is based on a true story from a meeting the Sopact team had with a large organization that runs multiple social programs. We wanted to share it because we believe it is relatable to everyone who runs surveys.

We were demonstrating our product's AI capability to analyze open-ended feedback and turn it into meaningful insights, when someone asked:

"Why would you bother asking open-ended questions?"

"It's so hard to analyze and you could just ask option-based question and get it over with."

(Check the image below; that's exactly what we were demonstrating.)

How to ask effective open-ended questions?

For a long time, even we shied away from asking open-ended questions, because analyzing them was a hard, manual, labor-intensive process. We used to recommend that our users stick to standard option-based responses, simply to make our lives easier at the analysis stage.

However, we quickly realized that the problem was not asking open-ended questions; it was the manual, labor-intensive process of deriving insights from the responses.

Today, we want to take the opportunity to explain to a wider audience why asking open-ended questions is not only important but almost mandatory, and how, in this day and age, no one has to dread the manual effort of deriving insights anymore.

"Why would you bother asking open-ended questions?"

"It's so hard to analyze and you could just ask option-based question and get it over with."

Let's take a hypothetical example: Say you conduct a resume-building workshop for people in your skill-building program so they can get better jobs and improve their quality of life as a result.

To gauge your workshop's impact, you might ask a couple of rating questions on a survey, say on a scale of 1 to 5.

To which you might get:

  • 20% rate it a 1,
  • 30% rate it a 3,
  • and 50% rate it a 5.
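
To see just how little that gives you, here is a minimal sketch in plain Python, using hypothetical ratings chosen to match the split above; everything a closed-ended rating question produces is a tally you can chart.

```python
# Minimal sketch: everything a closed-ended rating question yields is a tally.
# The ratings below are hypothetical, chosen to match the 20/30/50 split above.
from collections import Counter

ratings = [1, 1, 3, 3, 3, 5, 5, 5, 5, 5]  # ten hypothetical 1-5 responses

counts = Counter(ratings)
for score in sorted(counts):
    share = 100 * counts[score] / len(ratings)
    print(f"Rated {score}: {share:.0f}% of respondents")

# Output: a percentage per score, ready for a chart, but nothing about why
# participants rated the workshop the way they did.
```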

Yes, you have some numbers in nice charts. But:

  1. You didn't get any insights.
  2. You can't use it to persuade other participants to join your resume-building workshop.
  3. You can't use it to improve your program and its workshops.
  4. You got, quite literally, no direction at all.

This is where open-ended questions clearly outpace closed-ended or option-based ones.

Imagine you ask this instead:

Can you give us feedback on the resume-building workshop and the impact it had on your job search and interview calls?

And you might get these kinds of responses:

The first response clearly states that the student benefitted long after graduating from the program with successful job switches. The second one states that it helped boost the student's confidence, which, as we know, can be the difference between someone succeeding and someone not.

This nuance is impossible to capture in an option-based question, yet it is exactly the depth you need to truly understand the impact, improve the program, raise funds to keep it running, or serve whatever your motivation may be.

Now, by analyzing post-program feedback, we can see real insights. What were their experiences like after attending the workshop? Did their confidence improve? Did they land more or fewer interviews? What about those who did not attend?

You might ask, if I have 100+ responses, how am I supposed to do this analysis?

Our answer: this is exactly the kind of social good AI should be used for!

For eight years now, our purpose at Sopact has been to keep up with technological advancements that can make things easier for the social impact industry. Some have been good, some not so much. We now know that AI is here to stay, so why not put it to good use?

Whether it's 100 responses or 1,000, our AI-generated Thematic Analysis does what used to seem impossible (or, put more objectively, endless). Our new product, Sopact Sense, uses AI to identify patterns in these open-ended responses, and all you see are the insights you need to understand your program's impact better than before.
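
To make the idea concrete, here is a minimal sketch of one common approach to automated thematic analysis: cluster responses by shared vocabulary and read each cluster's top terms as a candidate theme. The responses are made up, and this is only an illustration of the general technique, not how Sopact Sense works internally.

```python
# Minimal sketch of automated thematic analysis: group open-ended responses
# by shared vocabulary, then inspect each group's top terms as a candidate
# theme. Illustrative only; the responses below are made up.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "The resume workshop helped me get more interview calls.",
    "I felt far more confident walking into interviews.",
    "My new resume landed me two interviews in a month.",
    "The biggest change was my confidence during the job search.",
    "I avoided taking on debt because the scholarship covered tuition.",
    "Graduating without debt changed everything for my family.",
]

# Turn free text into TF-IDF vectors, then cluster them into candidate themes.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Report each theme's share of responses and its most characteristic terms.
terms = vectorizer.get_feature_names_out()
for cluster in range(kmeans.n_clusters):
    members = [i for i, label in enumerate(kmeans.labels_) if label == cluster]
    top_terms = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    share = 100 * len(members) / len(responses)
    print(f"Theme {cluster}: {share:.0f}% of responses,",
          "top terms:", ", ".join(terms[i] for i in top_terms))
```

In practice, a modern tool would lean on a language model rather than simple keyword clustering, but the output has the same shape: recurring themes and how often each one appears.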

Many of our first users gave this a go for the one-year survey of their undergrad scholarship program. They dared to switch from the typical option-based question:

"Select if you have experienced the following."

to

"In your own words, tell us about the impact of the scholarship in your life?"

And after the AI thematic analysis, their discovery was huge:

Not only were their participants mentioning "financial relief", "possibility to enroll", and "possibility to graduate", which was expected, but 72% of them were also mentioning "avoidance of debt". This came as a true surprise, as they had never anticipated that some of the students were actually able to graduate free of debt, which in their own words "is more impact than we had imagined".

You too can make the switch: open yourself to open-ended questions and AI thematic analysis.

Foundation Head:

So, Girls Code CEO, tell me about your impact. How are you empowering the next generation of tech leaders?

Girls Code CEO:

I'm glad you asked! We've been using Sopact Sense to measure our impact, and the results are truly exciting.

Girls Code: Empowering the Next Generation of Tech Leaders

Girls Code is dedicated to boosting the confidence and skills of young women in STEM. Our impact, measured with Sopact Sense, shows significant improvements in coding abilities and self-assurance.

Let's Hear Our Stakeholder Voice Outcomes:

"Girls Code has significantly boosted the confidence and skills of young women in STEM. Before our program, 70% of participants lacked confidence in their coding abilities. After our workshops, this number dropped to 23%, and the average coding test scores increased from 53 to 72. Additionally, 70% of our participants had never built a web application before, which dropped to 26% post-workshop. These girls are not another participants but future tech leaders."

Foundation Head:

Wow, those are impressive results! The data clearly shows the tangible impact you're making. How has using Sopact Sense helped you communicate your impact more effectively?

Girls Code CEO:

Sopact Sense has been a game-changer for us. It not only helps us collect and analyze data more efficiently, but it also generates these powerful, data-driven summaries. This makes it much easier for us to communicate our impact to stakeholders, potential donors, and partners. It's helping us tell our story in a compelling, evidence-based way.

Foundation Head:

That's fantastic. It's clear that you're not just teaching coding; you're building confidence and opening doors for these young women. Keep up the great work!

Girls Code: Empowering Future Tech Leaders

Measuring our impact: From workshop to long-term outcomes

Program Highlights (dashboard metrics): increase in coding test scores, participants who built their first web app, boost in STEM confidence, and program recommendation score.

Impact Over Time

  • Job Interview Improvement: 36% → 10%
  • Positive Feedback: 40% → 26%
  • Resume Enhancement: 23% → 10%
  • Confidence Boost: 53% → 16%
  • Positive Impact: 96% → 30%

Key Outcomes

  • Lack of confidence in coding skills decreased from 70% to 23% post-workshop.
  • 63% of participants reported increased confidence in STEM skills.
  • The percentage of participants who had never built a web application decreased from 70% to 26%.
  • Average coding test scores improved from 52.77 to 71.87, a 36% increase.
