
Theory of Change: Model, Components & Training (Free)

Build a theory of change model that drives decisions. Learn the components, create diagrams, compare it with logic models, and apply a 5-step framework with real examples.

TABLE OF CONTENTS

Author: Unmesh Sheth

Last Updated:

April 17, 2026

Founder & CEO of Sopact with 35 years of experience in data systems and AI

Theory of Change

Your funder asks how your program creates change. You produce a PDF of colored boxes and causal arrows — built eight months ago in a consultant-led workshop, signed off once, untouched since. Your program data lives in three spreadsheets that map to none of the boxes in the diagram. That gap is The Framework Delay Tax — the compounding cost of a consultation-driven Theory of Change designed before any evidence existed to test it.

This guide makes a different argument than most. The consultation-first era is over. AI-native tools have collapsed the cost of framework extraction, qualitative analysis, and IRIS+/SROI alignment from months to seconds. The bottleneck has moved — from building the framework to collecting the right data and owning it end to end. Whether you are writing your first proposal, rebuilding a framework sitting on a wall, or aligning an existing Theory of Change to a funder's reporting standard, the path forward is the same: collect multimodal data from day one, let Sopact AI extract the framework from what you are already capturing, and watch it evolve with evidence rather than preceding it.


The Era of Consultation-Driven Theory of Change Is Over
The Framework Delay Tax
Focusing on Effective Data with AI Builds a Powerful Narrative

The Framework Delay Tax is the compounding cost of spending six to twelve months designing a Theory of Change before collecting a single data point that could test it. Every month of framework design is a month of untestable assumptions and wasted intake. By the time the framework is finalized, the data architecture — built separately — cannot answer its questions.

01
Define causal logic

Name the if-then chain and every assumption holding it together.

02
Map six components

Problem → Inputs → Activities → Outputs → Outcomes → Impact, plus the assumptions at every link.

03
Connect to data

Link each outcome stage to a collection instrument before launch.

04
Test & update

Revise assumptions quarterly as evidence accumulates — not annually.

What Is a Theory of Change?

A Theory of Change is a causal explanation — in plain language — for how and why your program produces change in the people you serve. It is not a mission statement, not a list of activities, not a program description. It is a testable hypothesis: if we do this activity with this population, then this change will occur, because this mechanism is in place. The "because" is the part most organizations skip.

A Logic Model describes what you do. A logframe formalizes it into a matrix. A Theory of Change adds the third dimension — the causal mechanism and every assumption holding it together. When Carol Weiss coined the term in the 1990s, the point was never the diagram. The point was making beliefs explicit enough that data could confirm or disconfirm them. For the specific operational differences between a Theory of Change and a Logic Model — when each is required, what each controls — see the Theory of Change vs Logic Model guide. For worked examples across workforce, education, healthcare, and agriculture sectors, see the Theory of Change examples guide.

The operational test for whether your framework is a theory or a narrative: name three causal links in it and, for each, the specific condition under which that link would fail. If you cannot, the framework is decoration. If you can, the conditions become monitoring questions — and the framework becomes something data can test.
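That operational test can be made concrete as a data structure. A minimal, hypothetical sketch in Python — the field names and the example link are illustrative, not part of any Sopact schema:

```python
# Hypothetical sketch: representing a causal link as a testable record rather
# than a diagram arrow. Every field below is invented for illustration.
from dataclasses import dataclass

@dataclass
class CausalLink:
    cause: str                # the upstream stage ("12-week skills training")
    effect: str               # the claimed result ("job-ready portfolio")
    assumption: str           # the belief the arrow depends on
    failure_condition: str    # the observable state under which the link breaks
    monitoring_question: str  # the check-in item that would detect the failure

links = [
    CausalLink(
        cause="12-week skills training",
        effect="job-ready portfolio",
        assumption="Employers value portfolios over credentials",
        failure_condition="Interviewed employers never ask to see portfolios",
        monitoring_question="Did any employer review your portfolio this month?",
    ),
]

# A link without a named failure condition is decoration; with one, it is a test.
for link in links:
    assert link.failure_condition and link.monitoring_question
```

The point of the structure is the last two fields: once a failure condition exists, it can be embedded in a mid-program instrument instead of being discovered at year-end.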

Theory of Change Model: The Six Components

Every Theory of Change model — regardless of sector, funder, or framework tradition — contains the same six structural components plus a hidden seventh layer. Understanding each precisely tells you what data to collect from day one, because the components are not conceptual labels. They are data collection triggers.

Interactive · Six components

The causal pathway — problem to impact



Each stage requires a different collection instrument — connected by a persistent stakeholder ID.

The first component is the problem statement — a precise articulation of who is affected, what causal conditions produce the problem, and why existing approaches have not solved it. This is the evidence base for why your program exists, not a restatement of your mission. Inputs are what you commit before activities begin — funding, staff time, facilities, curriculum, partnerships. Activities are the specific designed interventions you deliver using those inputs. Outputs are the direct countable products of activities. If stopping the program makes the metric disappear immediately, it is an output, not an outcome.

Outcomes are observable, measurable changes in stakeholders' knowledge, skills, behavior, or conditions as a result of your activities. Track them at three time horizons: short-term (during or immediately after the program), medium-term (3–12 months), and long-term (1+ year). Each horizon requires different data collection instruments connected to the same persistent stakeholder ID — otherwise you are tracking different populations at different moments, not the same people over time. Impact is the long-term systemic change your program contributes to across years.

The seventh layer — absent from most frameworks — is assumptions. Every arrow in a Theory of Change diagram carries an assumption. "Skills lead to confidence." "Employers value portfolios over credentials." "Participants have transportation to the training site." Some of these will break. A framework that never names its assumptions can never be improved when they fail, which is how the Framework Delay Tax compounds — year after year, invalid assumptions stay in the framework because no one built the monitoring questions that would detect them failing.

Most popular · Core fundamentals

An introduction to Theory of Change — the fundamentals

The most-watched explainer in our Theory of Change series. A plain-English walk-through of what a Theory of Change is, what each of the six components actually measures, and how to tell a working framework from one that exists only to satisfy a grant application.

Introduction to Theory of Change — the fundamentals masterclass with Unmesh Sheth
What you'll learn
  • 01 What a Theory of Change actually is — and what it is not.
  • 02 The six components and how each measures something different.
  • 03 Why outputs get confused with outcomes — and how to stop doing it.
  • 04 How to name assumptions without paralyzing your framework.
  • 05 The single sentence that turns a narrative into a testable theory.

How to Design a Theory of Change in the Age of AI

The most expensive mistake in impact measurement is not choosing the wrong framework. It is spending months perfecting one before collecting a single data point, and discovering at year-end that the measurement system built alongside it cannot answer the framework's questions. AI-native tools change the sequence entirely — the framework is no longer the starting artifact, it is the emergent one.

Playlist opener · 5-step series

How to design a Theory of Change in the age of AI

The most expensive mistake in impact measurement is spending months perfecting a Theory of Change before collecting a single data point. This session opens a five-step playlist on building one in the age of AI: collect multimodal data from day one under unique stakeholder IDs, let the framework emerge from the conversations you are already having, and turn a recorded Zoom transcript into a living narrative that evolves with evidence rather than preceding it. For fund managers, accelerator directors, and program evaluators, this is the architecture that saves 80% of the time most organizations waste on framework-first design.

How to design a Theory of Change in the age of AI — Sopact AI masterclass
🆔 Collect at source

Unique stakeholder IDs from day one. Documents, interviews, open responses, surveys — all centralized, zero fragmentation.

🧠 Analyze natively

AI surfaces themes across qualitative and quantitative data together — assumption testing built in, not bolted on.

🔄 Learn continuously

Mid-cycle corrections, real-time insights, a theory that evolves with evidence rather than preceding it.

The old sequence ran in one direction: design the Theory of Change in a workshop, get sign-off, then build data collection around it. The new sequence inverts that order. Start collecting multimodal data from day one under unique stakeholder IDs — documents, interviews, open-ended responses, survey data, recorded calls — and let the framework emerge from what you are already capturing. A recorded Zoom call between a funder and a grantee already contains the raw material: problems discussed, activities described, outcomes hoped for, assumptions made in natural language. Sopact AI extracts that structure automatically into a working framework you can refine across cycles.

The architecture has three stages. Collect at source: unique stakeholder IDs from first contact, every document and response and survey centralized under the same identity chain, validation at entry rather than retrospective cleaning. Analyze natively: AI reads qualitative and quantitative data together, surfaces themes across hundreds of responses in minutes, tests named assumptions against arriving evidence rather than waiting for year-end review. Learn continuously: mid-cycle corrections before the cohort closes, real-time insights that shape the next cohort, a theory that evolves with evidence rather than preceding it.

This approach saves roughly 80% of the time most organizations waste on retrospective data cleaning. For fund managers, accelerator directors, program evaluators, and grant makers, this is the architecture that converts a Theory of Change from a framework artifact into a live decision-making tool. The playlist opener below walks through the full five-step sequence.

Sopact Sense · Framework Intelligence · Demo Preview

From stakeholder input to a working framework

See the framework, alignment, and intelligence report Sopact Sense produces.

Six principles

Six moves that replace consultation-driven Theory of Change

Each principle flips a traditional assumption — from framework-first to data-first, from consultant-authored to AI-extracted, from compliance reporting to narrative reporting in the age of AI.

01
The pivot

Stop paying to build what AI extracts

Consultation-driven frameworks made sense when qualitative analysis took weeks and IRIS+ alignment was a specialist service. Those costs have collapsed. The expensive part of measurement is no longer constructing the framework — it is collecting the right data and owning it end to end.

A framework without data behind it is a document, not a theory — the Framework Delay Tax in its purest form.
02
🆔 Data ownership

Own the longitudinal record, not the diagram

The framework can be regenerated in seconds. Stakeholder data cannot be reconstructed retroactively. Persistent IDs assigned at first contact — carried forward across every cohort, cycle, and grant renewal — are the organization's irreplaceable asset.

Snapshots of different populations at different moments prove nothing. Longitudinal identity does.
03
📥 Collect what matters

Qualitative, longitudinal, contextual — from day one

Open-ended responses, interview transcripts, recorded calls, financial context, narrative reflections — all centralized under the same identity chain. In the age of AI, collecting this kind of multimodal evidence is no longer expensive; skipping it is.

The data you do not collect at first contact cannot be added later — the cohort has already moved on.
04
AI automation

Let Sopact AI automate framework extraction

A Zoom transcript, a proposal document, a program page — Sopact AI reads them and produces a six-component causal pathway with named assumptions in minutes. Your role shifts from constructing the framework to refining what the evidence reveals.

Six months of workshopping produces an untestable artifact. AI extraction produces a working hypothesis before lunch.
05
🏷️ Standards in seconds

Align to IRIS+, SROI, and IMP automatically

Sopact AI maps each indicator to its IRIS+ code, layers financial proxies for SROI return calculation, and organizes outputs into the IMP Five Dimensions (Who / What / How Much / Contribution / Risk). Gaps are flagged honestly where no match exists.

A framework that cannot be read against the standards funders use adds friction at every report cycle.
06
📖 Narrative reporting

Reports where story and numbers come from the same record

Numbers without narrative are compliance. Narrative without numbers is marketing. Sopact generates context-driven reports where both flow from the same stakeholder records, regenerated every cycle as new evidence arrives.

Output-heavy reports are increasingly discounted — funders now expect the "why" traceable to named participants.

Why Consultation-Driven Theory of Change Is Over

For two decades, a Theory of Change was built the same way across almost every organization: hire a consultant, run a three-day workshop, produce a PDF with boxes and arrows, attach it to the grant application, file it afterward. The consultation-driven model made sense in its era. Frameworks were expensive to construct by hand. Qualitative synthesis required weeks of manual coding. Aligning to standards like IRIS+ or SROI required specialists who spoke those dialects fluently. That era is over.

Three shifts have collapsed the logic of the consultant-first approach. Qualitative analysis that used to take weeks now takes minutes — the single largest cost driver in framework consulting is automated. Longitudinal data that used to require a full-time M&E hire now flows automatically from persistent stakeholder IDs assigned at first contact. Framework extraction from existing materials — transcripts, proposals, program pages, meeting recordings — is now the default starting point, not a premium service. The expensive part of measurement is no longer building the framework. It is collecting the right data and owning it end to end.

Consultation-driven Theory of Change did not fail because consultants were wrong about what the framework should contain. It failed because the framework was built in the absence of data about what was actually measurable — and by the time the organization realized its measurement system could not validate the theory the consultant wrote, the program cycle was closed and the next consulting engagement was already scheduled. Organizations paid twice for a framework that never evolved. The economics simply no longer work.

What replaces the consultant is not a cheaper consultant. It is a fundamentally different workflow: data collection designed first, AI-assisted framework extraction second, expert review as refinement rather than as origination. The expertise still matters — it just moves from building the initial artifact to interpreting what the evidence reveals.

Data Ownership: Collecting What Matters Most

If the framework is no longer the bottleneck, what is? Data ownership. The most valuable asset in impact measurement is no longer the diagram on the wall — it is the longitudinal record of each stakeholder, under a persistent identity, across every touchpoint the program created. That record lives inside the organization that collected it, carries forward into every grant application and renewal, and compounds in value every cycle. It is the organization's irreplaceable evidence, and no consultant can build it retroactively once the cohort has left.

Four types of data matter most, and collecting them effectively is where modern tools change what was previously possible:

Qualitative evidence at scale. Open-ended responses, interview transcripts, recorded calls — these used to require days of manual coding per cohort and typically got cut from analysis when budgets tightened. Sopact Sense reads them as they arrive, surfaces themes across hundreds of responses in minutes, and tags them against the outcome stages in your framework automatically. Qualitative evidence is no longer the premium add-on; it is the primary causal signal alongside the numbers.

Longitudinal identity. A participant who enrolled at baseline, answered a mid-program check-in, completed an exit survey, and responded to a 90-day follow-up is now one record — not four disconnected entries that a data team has to manually reconcile every reporting cycle. Persistent IDs assigned at first contact are the architecture that makes causal claims possible. Snapshots of different populations at different moments do not prove change. Longitudinal tracking of the same individuals does.

Contextual data that surrounds outcomes. Financial proxies for SROI, labor market indicators that contextualize employment outcomes, community health baselines that contextualize wellbeing shifts — context data flows into the same record as the stakeholder data. Your outcomes arrive carrying the qualifier that makes them meaningful: not "income rose" but "income rose faster than the regional median for this demographic during this period."

Narrative evidence that carries causal weight. The three sentences a participant writes in a reflection prompt — what changed, what held it back, what would help next time — are no longer marketing decoration layered on top of quant. Modern qualitative AI treats narrative as primary causal evidence, weighted equally with the rating scale that accompanied it. This is the single biggest shift in what funders can expect from a Theory of Change report: the story and the numbers arrive in the same document, traceable to the same stakeholders.

When these four data types are collected from day one under a single identity chain, the Theory of Change becomes something Sopact's impact intelligence platform can extract, refine, and keep current automatically. The framework is no longer the asset. The data is the asset. The framework is the organized view of what the data already shows.
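The longitudinal-identity idea above amounts to grouping every touchpoint under one persistent key. A minimal, hypothetical Python sketch — the ID, stages, and confidence field are invented for illustration, not Sopact Sense's actual schema:

```python
# Hypothetical sketch: four touchpoints resolved into one participant history
# via a persistent stakeholder ID assigned at first contact.
from collections import defaultdict

raw_entries = [
    {"stakeholder_id": "P-1042", "stage": "baseline",  "confidence": 2},
    {"stakeholder_id": "P-1042", "stage": "midline",   "confidence": 3},
    {"stakeholder_id": "P-1042", "stage": "exit",      "confidence": 4},
    {"stakeholder_id": "P-1042", "stage": "follow_up", "confidence": 4},
]

# Group by persistent ID: one record per person, not four disconnected rows
# that a data team reconciles by hand every reporting cycle.
records = defaultdict(list)
for entry in raw_entries:
    records[entry["stakeholder_id"]].append(entry)

history = records["P-1042"]
change = history[-1]["confidence"] - history[0]["confidence"]
print(f"P-1042 tracked across {len(history)} touchpoints, change = +{change}")
```

Without the shared ID, the same four rows are snapshots of four anonymous respondents, and the pre-post comparison in the last line is impossible to compute.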

Framework Alignment: IRIS+, SROI, and Impact Management in Seconds

A Theory of Change that stops at the internal causal diagram is useful for program staff. A Theory of Change aligned to the frameworks funders actually use — IRIS+ for impact investors, SROI for return calculations, the IMP Five Dimensions for structured reporting — is useful externally as well. Alignment used to be a specialist service that added months to framework development. Sopact AI collapses it to seconds, because alignment is pattern-matching against structured reference libraries — exactly the task AI does faster than humans without losing accuracy.

IRIS+ alignment. Map each outcome indicator in your framework to its closest IRIS+ code. Where no exact match exists, Sopact flags it honestly rather than forcing a fit — because the quality of your reporting depends on the quality of the alignment, not on its completeness. This is also how multi-theme organizations unify reporting across unrelated program areas. A healthcare program's "number of patients served" and a workforce program's "number of jobs created" both roll up to a shared "number of stakeholders reached" metric at the portfolio level, with theme-specific drill-down beneath it. The portfolio report becomes coherent without flattening the program-level specificity.

SROI with live financial proxies. Monetization values for social outcomes — wellbeing gains, employment income lifts, reduced public-system costs — used to require custom economic modeling for each program. Sopact AI layers proxy values from published libraries (UK Social Value Bank, HACT, peer-reviewed sources) onto your outcomes and calculates return ratios that update as new cohorts close. Your funders see an SROI that reflects this cohort, not the estimate produced at program design.
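The mechanics of that calculation are simple once proxies are attached to outcome counts. A hedged sketch — all counts and proxy values below are invented placeholders, not figures from the UK Social Value Bank or HACT:

```python
# Hypothetical SROI sketch: proxy values applied to outcome counts, divided by
# program investment. Every number here is an illustrative placeholder.
outcomes = {
    "secured_employment": {"count": 89,  "proxy_value": 14_000},  # per person
    "wellbeing_gain":     {"count": 143, "proxy_value": 1_200},
}
investment = 400_000  # total program cost for the cohort

total_social_value = sum(o["count"] * o["proxy_value"] for o in outcomes.values())
sroi_ratio = total_social_value / investment
print(f"Social value: {total_social_value:,}; SROI ratio: {sroi_ratio:.2f}")
```

The "live" part is that the counts come from closed cohort records, so the ratio is recomputed as each cohort closes rather than frozen at program design.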

IMP Five Dimensions. Who is affected, What changes for them, How Much the change is, your organization's Contribution to that change, and the Risk that the change does not hold — the IMP structure is the cleanest framing for portfolio-level reporting. Sopact organizes your extracted framework into the five dimensions automatically, with gap flagging where a dimension is under-evidenced. A weak Contribution story is visible before the report goes to the board, not after.

The combined output is context-driven narrative reporting: your Theory of Change, aligned to the standard the reader expects, with the narrative evidence that makes the numbers meaningful, regenerated every reporting cycle from the same underlying data. Fund managers and foundation portfolio teams running multi-program impact analysis can see the full architecture on the impact intelligence solutions page. For the specific mechanics of connecting a Theory of Change to monitoring and evaluation indicators — including how to sequence baseline, midline, and endline instruments — see the Theory of Change in monitoring and evaluation guide. For broader context on the measurement and management architecture that makes all of this possible, see the impact measurement and management guide.

Comparison · Document vs System

Four patterns that guarantee the Framework Delay Tax

The specific structural mistakes that turn a well-designed Theory of Change into a filed PDF — and how a testable system avoids each one.

01

Built for the grant, not the program

Framework created to satisfy a funder requirement — not to guide program design or measurement decisions.

02

Outcomes without instruments

Outcome stages asserted in the diagram but never connected to a named data collection instrument before launch.

03

Assumptions left implicit

Causal links carry unstated assumptions that are never monitored — discovered as failures only at year-end reporting.

04

No longitudinal identity

Participants tracked in aggregate snapshots — no persistent ID connecting baseline, program, and follow-up data.

Design decision | ToC as a document (the Framework Delay Tax applies) | ToC as a testable system (built inside Sopact Sense)
When built | For the grant proposal — before program design begins | In parallel with data collection instrument design
Outcomes | Copied from similar organizations or funder templates | Derived from stakeholder evidence and intake data
Assumptions | Implicit, or listed once and never revisited | Named, assigned monitoring questions, tested each cycle
Data connection | None — measurement designed after the framework | Each outcome linked to a named collection instrument
Participant tracking | Aggregate counts — no individual longitudinal records | Unique stakeholder IDs from enrollment through follow-up
Update cadence | Annual strategic planning, if at all | Quarterly — updated by evidence as data accumulates
Who uses it | Grant writers and board members | Program staff, M&E teams, funders, and participants
What Sopact Sense delivers
  • Causal framework architecture: every outcome stage linked to a named data instrument from program launch.
  • Unique stakeholder IDs: persistent from first contact through long-term follow-up — no manual reconciliation.
  • Assumption monitoring: each assumption connected to a mid-program question — tested every cycle.
  • Longitudinal records: pre-post and multi-cycle participant data enabling real causal analysis.
  • Disaggregated data: outcomes segmented by demographic variables captured at entry — no post-hoc work.
  • Funder-ready reports: impact reports that match your ToC by construction — not assembled after the fact.

Sopact Sense builds your Theory of Change as a testable system — not a document filed after the grant is won.

See how it works →

Tips, Common Mistakes, and What to Do Next

Name your three most fragile assumptions before your program launches — the beliefs most likely to fail first. Assign a monitoring question to each and embed it in the mid-program check-in instrument. When an assumption fails, you see it in the data in week six, not in the funder's year-end feedback.

Separate outputs from outcomes in every report. "We trained 200 people" is an output. "143 gained job-ready skills and 89 secured employment" is an outcome. Funders increasingly discount output-heavy reports for exactly this reason — the aggregate count tells them what you did, not whether it worked.

Start simple and refine from evidence. A working six-stage hypothesis built in an hour using the interactive template builder is more valuable than a perfect framework built over six months. The hour-long version gets tested against data. The six-month version gets filed. That is the Framework Delay Tax in its purest form.

Own your data from first contact. Every stakeholder who touches your program — applicant, participant, alumni, employer partner — gets a persistent ID at the moment of first contact. Every subsequent data point links to that ID. This is not a technical detail; it is the architecture that makes causal claims possible and makes every future report meaningfully comparable to past ones. For sector-specific worked examples of this architecture in action, see the Theory of Change examples guide.

Let AI do the structural work. Framework extraction, IRIS+ alignment, SROI calculation, qualitative theme surfacing, cross-cohort comparison — all of this is pattern-matching work that modern AI does faster and more consistently than humans. Your organization's role shifts from constructing the framework to interpreting what the evidence reveals when the framework meets the data.

Frequently Asked Questions

What is a Theory of Change?

A Theory of Change is a causal explanation for how and why specific program activities produce specific outcomes for a specific population. It maps the if-then logic from inputs through activities, outputs, and outcomes to long-term impact — naming the mechanisms and assumptions behind each link. Unlike a mission statement, a Theory of Change makes testable predictions that data can confirm or disconfirm.

What is the Theory of Change model?

The Theory of Change model is the six-component causal pathway: problem statement, inputs, activities, outputs, outcomes, and impact — with assumptions at every transition. Inputs measure resources. Activities measure effort. Outputs measure delivery. Outcomes measure change in participants. Impact measures systemic change over time. Named assumptions form the critical seventh layer that makes the model testable.

What is a Theory of Change framework?

A Theory of Change framework is the complete structure — diagram, indicators, named assumptions, and the data architecture that tests them. A diagram shows what you believe; a framework proves whether those beliefs hold. The framework is complete when each component has named indicators, each assumption has a monitoring question, and each instrument connects to persistent stakeholder IDs linking baseline, program, and follow-up records.

What is consultation-driven Theory of Change?

Consultation-driven Theory of Change is the traditional approach in which an external consultant facilitates a multi-day workshop to design the framework, produces a PDF artifact, and hands it off to the organization — typically for inclusion in a grant proposal. The approach was sensible when framework construction, qualitative analysis, and standards alignment were manual and expensive. AI-native tools have collapsed those costs, shifting value from framework construction to data collection and data ownership.

Why is data ownership more important than the framework?

Data ownership matters more than the framework because the framework can now be extracted from data in seconds, but the data itself cannot be reconstructed retroactively. A persistent longitudinal record of every stakeholder — under a single identity chain, carrying forward across cohorts and grant cycles — is the organization's irreplaceable asset. The framework is the organized view of what that data already shows, regenerable whenever needed.

How long should building a Theory of Change take?

A working Theory of Change hypothesis should take under an hour using the interactive template builder or AI-assisted extraction from existing program documents. Spending six to twelve months in consultant-facilitated framework workshops before collecting any data is the Framework Delay Tax — it produces an untestable artifact because the data needed to validate it was never designed to answer its questions.

What is the Framework Delay Tax?

The Framework Delay Tax is the compounding cost of spending months designing a Theory of Change before collecting evidence that could test it. Every month of framework design is a month of untestable assumptions and wasted intake data. Sopact Sense eliminates this tax by building the Theory of Change inside the data collection system from day one — so the framework evolves with evidence.

Can AI build a Theory of Change?

Yes. A recorded conversation between a funder and grantee already contains the raw material — problems discussed, activities described, outcomes hoped for, assumptions made in natural language. Sopact's AI workflow extracts this structure automatically and creates a working framework in minutes. The Sopact AI GPT ebook contains the prompt library for reproducing this extraction.

Can Sopact AI align my Theory of Change to IRIS+?

Yes. Sopact AI maps each outcome indicator in your framework to its closest IRIS+ code and honestly flags indicators where no exact match exists — because the quality of reporting depends on the quality of the alignment, not on forcing a fit. Multi-theme organizations can also use IRIS+ alignment to unify reporting across unrelated program areas through shared top-level metrics with theme-specific drill-downs.

How does Sopact handle SROI financial proxies?

Sopact layers proxy values from published libraries (UK Social Value Bank, HACT, peer-reviewed sources) onto your outcomes and calculates return ratios that update as new cohorts close. Organizations can also provide their own proxy values where their context differs from published norms — the calculation updates live against the actual cohort data rather than against the estimate produced at program design time.

What is the difference between outputs and outcomes in a Theory of Change?

Outputs are the direct products of activities — sessions delivered, participants enrolled, hours of training. Outcomes are changes in participants — increased skills, changed behavior, improved conditions. The diagnostic test: if stopping the program makes the metric vanish immediately, it is an output. If the change persists in participants over time, it is an outcome. Funders increasingly discount output-heavy reports for exactly this reason.

What is the difference between a Theory of Change and a Logic Model?

A Logic Model is a compact linear diagram (inputs → activities → outputs → outcomes) focused on operational clarity and compliance reporting. A Theory of Change adds the causal mechanisms and assumptions explaining why those activities produce change for your specific population. Most programs benefit from both tools. See the full side-by-side comparison in the Theory of Change vs Logic Model guide.

How does a Theory of Change connect to monitoring and evaluation?

Each Theory of Change outcome stage maps to an M&E indicator and each named assumption maps to a monitoring question. When both are designed together, monitoring data continuously tests your causal claims; when designed separately, insights arrive too late to influence the program that generated them. See the Theory of Change in monitoring and evaluation guide for the complete integration architecture.

What does a Theory of Change example look like?

A worked Theory of Change example names a specific population, problem, set of inputs and activities, measurable outputs and outcomes at defined time horizons, and a long-term impact — with named assumptions at each causal transition. For full sector-specific examples across workforce, education, healthcare, and agriculture, see the Theory of Change examples guide.

Next step · Build yours

Build a Theory of Change that tests itself

The Framework Delay Tax closes when your Theory of Change is built inside your data collection system — not alongside it. Sopact Sense assigns unique stakeholder IDs at enrollment, connects every outcome stage to a collection instrument, and surfaces failing causal assumptions before the end of the grant cycle.

  • Framework extracted from transcripts, proposals, or program URLs in minutes.
  • Indicators aligned to IRIS+, IMP Five Dimensions, or SROI — your choice of standard.
  • Gap report shows which framework questions your data cannot yet answer.

Training Series · Theory of Change — Full Video Training
🎓 Nonprofit & Foundation Teams ⏱ Self-paced Free
Theory of Change Training Series — Sopact
Ready to build your own Theory of Change? Sopact Sense turns every outcome statement into a live measurement instrument — no spreadsheets, no manual reconciliation.
Watch Full Playlist