
Logic Model: 5 Components and What to Track in Each

A logic model framework maps inputs, activities, outputs, outcomes, and impact. Learn components, see examples, and connect every arrow to evidence.

Updated May 3, 2026

A logic model is a one-page picture of your program. It connects what you do to what changes. Most teams draw one and never come back.

This guide is for nonprofit and impact-program staff who need to draw a logic model that holds up when a funder asks whether the program worked. You will find plain definitions of each of the five components, four worked examples across workforce, public health, education, and social work, and the practices that keep the diagram connected to evidence instead of filed in a folder.

  • The five components and what to track in each
  • Plain-language definitions for the head terms
  • Six practices that keep the diagram honest
  • An after-school literacy worked example
  • Three nonprofit shapes, same model
  • Common questions, answered
The Five Components

Inputs, activities, outputs, outcomes, impact: what each component tracks

The W.K. Kellogg Foundation formalized the logic model components (inputs, activities, outputs, outcomes, impact) in its development guide, and the CDC, USAID, United Way, and most foundations now use the same five columns. Each column answers a different question about your program. The discipline is making sure your data answers the question for every column, not only the first three.

01

Inputs

What did we put in?

Track: budget, staff time, materials, partner commitments, technology.

02

Activities

What did we do?

Track: workshops held, sessions delivered, counseling provided, services rendered.

03

Outputs

How much got delivered?

Track: people enrolled, completion counts, hours delivered, attendance rate.

04

Outcomes

Who actually changed?

Track: knowledge gains, behavior change, employment status, reading level, health status.

05

Impact

What persisted?

Track: long-term wages, sustained behavior, recidivism rate, generational outcomes.

What has to be true between each column

  • Resources reach the program on schedule.
  • Activities are delivered as designed.
  • The right people enroll and complete.
  • Activities cause the change, not other factors.
  • Short-term gains hold up over time.

These are assumptions, not facts. Every program has them. Strong programs write them down next to each arrow and check whether they held at the end of each cycle.

Source: W.K. Kellogg Foundation Logic Model Development Guide (2004), adopted by the CDC, USAID, and United Way. The five columns are the standard. What you track in each is what makes them useful.
Masterclass · 6 min

The logic model architecture most nonprofits get wrong


Unmesh Sheth, Founder and CEO, Sopact, walks through the five-stage chain and where most programs lose the connection between activities and outcomes.

Definitions

Plain definitions for the head terms

The same five-column structure goes by several names in foundation guides, federal program manuals, and academic textbooks. The definitions below cover the head terms together so you can map whatever vocabulary your funder uses to the components your data already tracks.

What is a logic model?

A logic model is a one-page picture of a program. It shows the resources you put in, the activities you do with those resources, the things those activities produce, the changes for participants, and the longer-term effect the program is meant to contribute to. It was developed at the W.K. Kellogg Foundation as a planning and evaluation tool and is now the most widely required framework in nonprofit grant applications and public funding.

A logic model is most useful when each of its boxes connects to specific data your program already collects (or plans to collect): a budget line, a milestone count, a survey question, a follow-up response. A logic model that names the boxes without naming the data is a diagram. A logic model that names both is a measurement plan.

Logic model meaning

The meaning of a logic model is captured in its purpose: a planning and evaluation tool that connects what a program puts in to what changes for participants. Each column of the diagram answers a different question. Inputs answer what you spent. Activities answer what you did. Outputs answer how much got delivered. Outcomes answer who actually changed. Impact answers what persisted over time.

The logic model is not a strategic plan, a theory, or a mission statement. It is a working document for staff, funders, and evaluators to share the same picture of how a program is meant to work and which parts are being measured.

What is a logic model framework?

A logic model framework is the standard five-column structure (inputs, activities, outputs, outcomes, impact) used to describe how a program is meant to work. It was formalized by the W.K. Kellogg Foundation Logic Model Development Guide and adopted by the CDC, USAID, United Way, and most major foundations. Some versions add a "Situation" or "Problem" column at the far left; the Kellogg framework treats the five components as the core.

A framework is a discipline more than a template. The discipline is that every box has to justify its existence by connecting to a measurable change. Generic templates in Word documents or PowerPoint slides encourage treating the framework as a compliance artifact: boxes filled, arrows drawn, model saved as PDF. A logic model template built for measurement instead treats the framework as a data schema, where each column is connected to a question in the survey and a field in the participant record before enrollment opens.
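One way to picture "the framework as a data schema" is as a plain mapping from each component to the fields that evidence it. This is a hypothetical sketch, not Sopact's implementation; every field name here is illustrative.

```python
# Hypothetical sketch: a logic model expressed as a data schema,
# where every component names the concrete fields that evidence it.
logic_model = {
    "inputs":     ["budget_line", "staff_fte", "partner_agreements"],
    "activities": ["session_log", "attendance_record"],
    "outputs":    ["enrolled_count", "completion_count", "hours_delivered"],
    "outcomes":   ["confidence_score_intake", "confidence_score_exit"],
    "impact":     ["employment_status_90d", "wage_180d"],
}

# A component with no data field is a diagram box, not a measurement.
unmeasured = [component for component, fields in logic_model.items() if not fields]
```

If `unmeasured` comes back non-empty at design time, the model has a box that cannot be tested, which is exactly the gap the paragraph above describes.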

What are the 5 components of a logic model?

A logic model has five components: Inputs (resources invested), Activities (what the program does with those resources), Outputs (the direct, countable products of activities), Outcomes (the changes in knowledge, skills, behavior, or conditions for participants), and Impact (the longer-term systemic change the program contributes to). The line between outputs and outcomes is where most logic models get into trouble.

Inputs

Resources the program invests: budget, staff time, materials, partner agreements, technology. Inputs answer the question: what did we put in? Inputs are usually well-tracked because they are tied to budget lines and contracts.

Activities

What the program does with the inputs: workshops, counseling sessions, tutoring, case management, mentorship pairings. Activities answer the question: what did we do? Most programs track activities through attendance logs and session schedules.

Outputs

The direct countable products of activities: people enrolled, sessions delivered, hours of instruction, completion counts. Outputs answer the question: how much got delivered? Outputs confirm that activities happened. They do not, on their own, prove that anything changed for participants.

Outcomes

The changes in participants: knowledge gained, behavior changed, employment status, reading proficiency, health status, housing stability. Outcomes answer the question: who actually changed? This is the column funders care about most. It is also the column most programs track least, because it requires a survey or measurement at intake and again at exit.

Impact

The longer-term effect the program contributes to: sustained employment at living wage, reduced recidivism, improved community health, breaking intergenerational poverty. Impact answers the question: what persisted? Impact requires follow-up measurement at 90 days, 180 days, or one year after program exit. It is the column that most often stays theoretical.

What is a logic model in social work?

A logic model in social work uses the same five-column structure as any other field, but social work logic model outcomes are often non-linear because stabilization is itself an outcome rather than a step toward another outcome. Common social work logic model outcomes include housing stability at 90 days, employment engagement, reduced crisis service utilization, and self-reported safety and wellbeing.

Social work logic models also need to handle case-based work where the unit of analysis is the family or household, not only the individual. The five components still apply: inputs (case managers, partner agencies), activities (intake, service coordination, safety planning), outputs (cases opened, referrals completed), outcomes (stability, engagement, reduced harm), impact (breaking intergenerational cycles). The difference is that the arrow between outcomes and impact is rarely linear and should not be forced into a workforce-style chain.

Related but different terms

Funders often use these terms interchangeably, but they should not be: each one does a different job.

Often confused with

Theory of change

A logic model describes the program. A theory of change explains why the program should work, including the assumptions and contextual conditions. Most programs need both, built from the same data.

Often confused with

Logframe

A logframe is a four-by-four grid: goal, purpose, outputs, and activities down one axis; narrative summary, indicators, means of verification, and assumptions across the other. It overlaps with a logic model but is more rigidly linked to project-cycle management. USAID and the EU use logframes; most US foundations use logic models.

Often confused with

Results framework

A results framework is a hierarchical version of the same idea, with a single development objective at the top and intermediate results below. USAID uses this format. The mechanics are similar to a logic model with explicit indicators at each level.

Often confused with

Strategic plan

A strategic plan is an organizational document covering goals, priorities, and resource allocation. A logic model describes how a single program is meant to work. An organization can have one strategic plan and a dozen logic models.

Six Practices

Six practices that keep a logic model honest

Most programs draw a logic model for the grant application and never look at it again. The six practices below are how teams that actually use their logic models keep them connected to evidence, cycle after cycle. None of them require new software. They require choosing what to track in each box and sticking with the choice.

01 · Design

Start at impact, work backwards

Begin with the change you want, not the workshop you already run.

Pick the long-term change first. Then the outcomes that lead to it. Then the activities. Then the inputs. Designing forward from activities traps the model in describing what staff already do, which is never the point of an evaluation framework.


Why it matters. If you start from the workshop, every workshop survives the design process, including the ones nothing depends on.

02 · Track

Pick what to track in each box

A box without a data field is a promise the program cannot keep.

For every box in the diagram, write down exactly what data captures it. Inputs become budget lines and partner agreements. Activities become attendance logs. Outputs become count rollups. Outcomes become survey questions at intake and exit. Impact becomes follow-up survey questions at 90 or 180 days.


Why it matters. A logic model with five clean boxes and zero data fields is a flowchart, not an evaluation framework.

03 · Identify

Use the same ID across every survey

One person, one ID, every survey from intake through follow-up.

Assign a participant ID at first contact. Use it on intake, on exit, and on every follow-up. Without a shared ID, exit surveys cannot be matched back to baseline, and 180-day follow-ups cannot be matched to either. The last two columns of the diagram become impossible to test.


Why it matters. Sarah Johnson at intake and S. Johnson at follow-up are the same person, but to a survey tool with no shared ID they are two records that need a human to match.
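A minimal sketch of the difference a shared ID makes (the ID format, names, and scores are all illustrative): with one key on both records, the join is mechanical; matching "Sarah Johnson" to "S. Johnson" is not.

```python
# Hypothetical sketch: intake and exit records keyed by a shared
# participant ID. The ID, not the name, is the join key.
intake = {"P-0017": {"name": "Sarah Johnson", "reading_level": 2.1}}
exit_records = {"P-0017": {"name": "S. Johnson", "reading_level": 4.3}}

gains = {
    pid: round(exit_records[pid]["reading_level"] - rec["reading_level"], 1)
    for pid, rec in intake.items()
    if pid in exit_records  # unmatched IDs simply drop out, visibly
}
```

With only names and emails as keys, the same computation needs a fuzzy-matching step and a human to review it, which is the reconciliation work the practice above is designed to remove.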

04 · Distinguish

Tell outputs and outcomes apart

Trained 25 people is an output. 12 got hired is an outcome.

Outputs count what the program did. Outcomes count what changed for participants. Funders want the second. Most programs only track the first because attendance logs are easier to produce than baseline-and-exit surveys. Strong programs build both into the same data system from day one.


Why it matters. If the outcomes column reads like a list of activities, the program will fail its next funder review.

05 · Assume

Write down what has to be true

Every arrow has assumptions. Surface them before they break.

Employer partners must engage. Participants must commit the time. The labor market must remain stable. The curriculum must be culturally appropriate. Each of these is an assumption sitting on an arrow in your diagram. Write them down next to the relevant arrow and check whether they held at the end of each cycle.


Why it matters. Programs usually find out an assumption broke 12 months later, in a final evaluation report, when it is too late to adjust.
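Recording assumptions next to their arrows can be as simple as a list that gets reviewed every cycle. This is a hypothetical sketch; the arrow labels and claims are illustrative, and `end_of_cycle_review` is an invented helper, not a product feature.

```python
# Hypothetical sketch: each assumption written against the arrow it
# sits on, then checked at the end of every cycle rather than
# discovered in a final evaluation report.
assumptions = [
    {"arrow": "inputs -> activities", "claim": "Resources arrive on schedule"},
    {"arrow": "outputs -> outcomes",  "claim": "Employer partners keep hiring"},
]

def end_of_cycle_review(assumptions, observed):
    """Return assumptions that broke, or were never checked, this cycle."""
    return [a for a in assumptions if observed.get(a["claim"]) is not True]

# Only the first assumption was actually verified this cycle.
flagged = end_of_cycle_review(assumptions, {"Resources arrive on schedule": True})
```

Here `flagged` surfaces the employer-partner assumption because nobody checked it, which is the failure mode the practice warns about: an unchecked assumption should be treated the same as a broken one.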

06 · Revisit

Come back to the model every cycle

A model that never changes after data arrives is a model nobody is reading.

After each cohort or reporting period, check which boxes had supporting evidence, which arrows held, which assumptions broke, and what should be rewritten. Logic models are working documents, not deliverables. Programs that revise the model after each cycle are the programs whose models keep getting more accurate.


Why it matters. A logic model that stays the same after three cycles of data is a logic model that nobody is using to learn from the program.

The through line: every practice above is a way to keep one box in the diagram connected to evidence. Programs that follow all six end up with logic models that are usable for decision-making rather than for compliance alone.

Decisions

Six choices that decide whether your logic model holds

Most logic model failures trace back to six decisions made (or not made) before the first participant enrolls. Each row below is one decision. The broken way is what happens when the decision is left to whoever happens to set up the survey. The working way is what happens when the decision is made on purpose.

The choice
Broken way
Working way
What this decides
01 · Where you start drafting: the beginning of the diagramming process.
Broken

Start at activities. Write down the workshops the program already runs, then try to connect each one to an outcome. Result: every existing activity survives, including the ones nothing depends on.

Working

Start at impact, work backwards. Pick the long-term change first, then outcomes, then activities, then inputs. Activities that do not produce a needed outcome get cut before they are listed.

Whether the model is a description of what staff already do or a plan for what the program needs to produce.

02 · When you write the survey: the timing of measurement design.
Broken

Write the model first. File it. Six months in, ask the M&E team to put together a survey. They build one based on what data is convenient to collect, not what the model needs.

Working

Write the survey questions while writing the model. Each outcome in the model is paired with the question that captures it before enrollment opens. Same document, same review.

Whether the data answers the question the model asks or only whatever is convenient to collect.

03 · How you identify participants: linking surveys across time.
Broken

Collect name and email on each survey. Match them by hand at reporting time. Sarah Johnson at intake becomes S. Johnson at exit and a new email address at follow-up. Three records, one person, no clean match.

Working

Assign one participant ID at first contact. Use it on every survey: intake, exit, 90-day, 180-day. Surveys link automatically. No reconciliation step. Outcomes can be matched to baselines without a human in the loop.

Whether the long-term outcome columns can be measured at all, or stay theoretical in perpetuity.

04 · What you track per box: coverage across all five components.
Broken

Count outputs because attendance logs are quick to produce. Estimate outcomes from anecdotes. Skip impact entirely. Final report shows three of the five columns; the funder asks about the other two.

Working

Each of the five components gets one or more specific data fields written down at design time. Inputs in the budget tracker. Activities in the session log. Outputs in the rollup. Outcomes in the survey. Impact in the follow-up.

Whether the funder report describes delivery only or includes measured change for participants.

05 · When you check assumptions: testing the conditions for each arrow.
Broken

List assumptions in the appendix. Discover at the end of year one that an employer partner stopped hiring, the labor market shifted, and three of the assumptions never held. The arrows above them cannot be tested.

Working

Write each assumption next to the relevant arrow. Check it at the end of each cohort or quarter. When one breaks, update the model and the program design before the next cycle, not after.

Whether the program adjusts in flight or finds out 12 months late that the model never matched reality.

06 · Where the model lives: the connection between diagram and data.
Broken

Logic model in Word. Surveys in Google Forms. Attendance in Excel. Analysis in Tableau. Four tools, four ID schemes, zero link between the diagram and the data. Reporting takes 11 to 14 days of reconciliation per cycle.

Working

Model and data in one system. Each box in the diagram corresponds to a field, a question, or a rollup. The model is the schema. Reporting takes hours, not weeks, because nothing needs to be reconciled.

Whether the team spends evaluation cycles analyzing the data or reconstructing it.

Compounding effect

These six decisions stack. Decision 01 (where you start drafting) controls what ends up in the model. Decision 06 (where the model lives) controls whether anyone can use it. Get decisions 01 and 06 right and the middle four follow naturally. Get either one wrong and the others usually go wrong with it.

Worked Example

An after-school literacy program: a logic model in practice

Below is one of the most common nonprofit shapes: an after-school literacy program serving second through fourth grade students in a single school district. The five components are familiar. What separates the version that works from the version that gets filed is what gets tracked in each box, and whether outputs and outcomes can be matched to the same student over time.

We had a logic model from the original grant. Five clean columns, arrows between them, color-coded. By the third year, attendance rolls were our entire reporting story. Reading-level gains were anecdotal. The board kept asking whether the kids who started behind were catching up. We had data on hours of tutoring delivered but no way to connect a tutoring count back to a specific student's reading score, because the scores lived in the school district's system and the tutoring logs lived in a Google Sheet.
Program manager, after-school literacy nonprofit, mid-cohort cycle.
Output count

Tutoring hours delivered

Quick to count. Pulled from the attendance log every Friday. Tells you the program ran. Tells you nothing about whether students improved.

Outcome change

Reading-level gain per student

Captured at intake and at end-of-cohort using the same assessment. Tells you who improved, by how much, and which students did not. Useful to a board, useful to a funder, useful to the next cohort design.

The two axes only become useful when they are linked at the student level: same student ID on both the tutoring log and the reading-level assessment. Without that link, the program reports hours; with it, the program reports change per hour invested.
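The student-level link can be sketched in a few lines. All IDs, hours, and scores below are made up for illustration; the point is that one shared key turns two unrelated logs into a per-student dosage-and-change view.

```python
# Hypothetical sketch: the student ID links the tutoring log to the
# reading assessment, so the program can report change per hour invested.
tutoring_hours = {"S-101": 24, "S-102": 18}   # from the attendance log
reading_scores = {                            # (intake, end-of-cohort)
    "S-101": (2.1, 3.0),
    "S-102": (2.4, 2.7),
}

gain_per_hour = {
    sid: round((post - pre) / tutoring_hours[sid], 3)
    for sid, (pre, post) in reading_scores.items()
    if sid in tutoring_hours  # students missing from either log drop out
}
```

Without the shared `sid`, the left dictionary lives in a Google Sheet and the right one in the district's system, and this three-line computation becomes the manual matching project the program manager describes above.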

Sopact Sense produces

A logic model that updates as data arrives

  • Inputs tracked at the budget line. Tutor hours, materials, partner-school commitments rolled into the inputs column automatically.
  • Outputs rolled up by student ID. Sessions delivered linked to each student, not aggregated as a program total. The program can see who got the dose.
  • Outcomes captured per student. Reading-level scores at intake and at end-of-cohort entered against the same student ID. Pre and post comparable per child.
  • Impact measurable at follow-up. 90-day and 180-day surveys go to the same student IDs. Grade-promotion data and continued reading habits can be linked back to the cohort.
Why traditional tools fail

A logic model that stops at outputs

  • Inputs in the budget tracker. Activities in the attendance log. Outputs in a spreadsheet rollup. Three different systems, three different IDs.
  • Outputs as program totals. "We delivered 3,200 tutoring hours" is the only output the program can report; per-student dosage cannot be reconstructed without manual matching.
  • Outcomes anecdotal or absent. Reading-level scores live in the district's system; the program does not have access in a usable format. Outcome reporting relies on teacher quotes.
  • Impact column blank. No follow-up at 90 or 180 days because there is no shared ID and no system to send the surveys. The last column of the diagram stays empty across every reporting cycle.

The integration is structural, not procedural. Sopact Sense does not add a step to "match the data" at reporting time. The match happens at collection, because the same participant ID is on the intake form, the assessment, and the follow-up survey. The logic model and the data sit in the same system. Reporting becomes a query, not a reconciliation project.

Three Nonprofit Shapes

Same logic model, three different operational realities

Whichever way your nonprofit is shaped, the logic model is the same five columns. What changes is where the diagram and the data lose contact with each other. The three shapes below cover most of the US nonprofit and impact-program landscape.

01 · Archetype

Multi-program nonprofit

3 to 12 programs under one roof, each with its own logic model.

Typical shape. A single nonprofit runs workforce, youth, and community-health programs side by side. Each program has its own logic model, often drawn by a different team in a different template, with outcome language that does not match across programs. The executive director needs an organization-wide outcome story twice a year.

What breaks. Three programs, three Word templates, three Google Forms, three Excel files, three different ID schemes. When the board asks for a cross-program outcome rollup, the M&E team spends three weeks reconciling. Outcomes that were measured at all cannot be compared because the questions were worded differently in each program.

What works. One shared outcome taxonomy across all programs: same knowledge-gain question, same employment-status question, same wellbeing scale wherever they apply. Organization-wide participant IDs so the same person across two programs is counted once. Board-ready outcome views that update as data arrives, not at year-end.

A specific shape

A community-development nonprofit runs four programs in parallel: workforce training, youth tutoring, financial coaching, and senior services. With one shared participant schema, the board sees that 18 percent of workforce participants are also enrolled in financial coaching, and outcomes for that subset are measurably stronger. None of that was visible from the four separate spreadsheets.

02 · Archetype

Partner-delivered network

One logic model at headquarters, many implementing partners delivering locally.

Typical shape. A national or regional intermediary defines a logic model for a program, then funds local nonprofits to deliver it. The headquarters team owns the framework. Partner organizations own the day-to-day data. The funder sees only the rolled-up report.

What breaks. Each partner uses their own tools and definitions. Partner A measures employment at 30 days. Partner B measures it at 90 days. Partner C does not measure it at all. Headquarters spends 60 percent of its evaluation cycle translating partner data and 40 percent analyzing it. Funder reports lag the partner cohorts by 6 to 12 months.

What works. A single shared survey schema that every partner collects into. Partner-level views preserved for local accountability; headquarters rollups generated automatically. Network-wide participant IDs, so a beneficiary served by two partners is counted once and tracked across both.

A specific shape

A foundation funds a six-partner workforce program across four states. With a shared schema, headquarters can answer the board question "what is the 90-day employment rate across the network" in the meeting itself, with current data, not three weeks later from a stitched-together report.

03 · Archetype

Single-program nonprofit

One program, one cohort at a time, one logic model that has to hold.

Typical shape. One bootcamp, one fellowship, or one cohort-based program running multiple times a year with the same curriculum. A single logic model covers the work. There are no other programs to hide behind in board reports.

What breaks. The logic model lives in a Word document. Surveys live in Google Forms. Attendance lives in Excel. Outcomes are tracked sometimes; follow-up is rare. When a funder asks whether the program worked, the director points to attendance and completion stats. Next year's cohort runs unchanged because no evidence came back to revise the model.

What works. The logic model, the surveys, and the participant tracker all sit in one connected system. Every cohort tests the same outcomes with the same questions, so cohort-over-cohort comparisons are real. Funder reports show measured change with traceable data, not estimates from attendance. Each cohort informs the next.

A specific shape

A regional coding bootcamp running three 12-week cohorts a year. With outcomes captured at intake and exit using the same questions across cohorts, the program can show its employment rate dropping 8 points after a curriculum change and rising 12 points after a follow-up adjustment. Decision-grade evidence, not anecdote.

The pattern is the same across all three shapes. The first three columns of the diagram (inputs, activities, outputs) get tracked. The last two (outcomes, impact) are where the data thins out or stops. Closing the gap is an architecture decision, not a template decision.

Masterclass · 4 min

Output vs outcome: 7 rules to measure what actually changed


Unmesh Sheth, Founder and CEO, Sopact. Four minutes on the seven rules that separate output reporting from outcome reporting.

Logic Model Software

Logic model software: what it does and what it doesn’t

Search results for "logic model software" return two different kinds of tools. Most readers do not realize they are different categories until they buy the wrong one for the job they actually need to do.


Diagramming and survey tools. Lucidchart, Visio, Miro, Canva, and methodology platforms like MissionMet and DoView help you draw a logic model. SurveyMonkey, Google Forms, and Qualtrics help you collect responses to the questions in it. These are good tools for the jobs they do. They do not, on their own, link the diagram to the data, and they do not preserve a participant identity across surveys, which is what the outcome and impact columns of a logic model require.

Measurement systems. Sopact Sense is a different category. The logic model becomes the schema. Each component connects to a survey question, a tracked count, or a follow-up. The same participant ID flows through intake, exit, and 90 or 180-day follow-ups, so the columns on the right side of the diagram (outcomes, impact) can actually be tested. The diagram and the data sit in one system rather than being reconciled in a spreadsheet at reporting time.

The gap most teams hit

Logic model software that draws the diagram cannot tell you whether the diagram held. Survey software that captures responses cannot link them across time without a shared participant ID. The gap between "we have a logic model" and "we have evidence the logic model worked" is where most programs live for years. Closing it is not a tool feature; it is an architecture choice.

Common Questions

Logic model questions, answered

The questions below cover what nonprofit and impact-program staff most often need to know when designing or updating a logic model. Each answer is plain enough to share with a board member and specific enough to give a program officer somewhere to start.

01

What is a logic model?

A logic model is a one-page picture of a program. It shows the resources you put in, the activities you do with those resources, the things those activities produce, the changes for participants, and the longer-term effect the program is meant to contribute to. The W.K. Kellogg Foundation formalized the five-column format in its development guide and most foundations and government funders now require one in grant applications.

02

What is a logic model framework?

A logic model framework is the standard five-column structure (inputs, activities, outputs, outcomes, impact) used to describe how a program is meant to work. It was formalized by the W.K. Kellogg Foundation Logic Model Development Guide and adopted by the CDC, USAID, and most major foundations. It is a discipline more than a template: every box should connect to a measurable change, or it does not belong in the diagram.

03

What are the 5 components of a logic model?

The five components of a logic model are: Inputs (resources invested, like budget, staff, materials, partner commitments), Activities (what the program does with those resources, like workshops or counseling sessions), Outputs (the direct countable products of activities, like people enrolled or hours delivered), Outcomes (the changes in knowledge, skills, behavior, or conditions for participants), and Impact (the longer-term change the program contributes to). The line between outputs and outcomes is where most logic models get into trouble.

04

Logic model meaning

The meaning of a logic model is captured in its purpose: it is a planning and evaluation tool that connects what a program puts in to what changes for participants. Each column of the diagram answers a different question. Inputs answer what you spent. Activities answer what you did. Outputs answer how much got delivered. Outcomes answer who actually changed. Impact answers what persisted over time.

05

What is a logic model example?

A workforce logic model example: Inputs are a $180K budget, three staff, and curriculum licenses. Activities are a 12-week coding bootcamp with mentorship. Outputs are 120 enrolled and 85 percent completion. Outcomes are confidence score gains from 2.1 to 4.3 and 12 participants employed within six months. Impact is economic mobility through living-wage tech employment. Every arrow between columns is a connection the program should test with evidence.

06

What is a logic model in social work?

A logic model in social work uses the same five-column structure, but outcomes are often non-linear because stabilization is itself an outcome rather than a step toward another outcome. Common social work outcomes include housing stability at 90 days, employment engagement, reduced crisis service utilization, and self-reported safety and wellbeing. Logic models in social work need to capture the non-linearity rather than forcing case-based work into a workforce-style outcome chain.

07

What is the difference between outputs and outcomes in a logic model?

Outputs are the countable products of program activities (sessions delivered, participants served, hours of instruction). Outcomes are the changes in knowledge, skills, behavior, or conditions that result from those activities. "Trained 25 people" is an output. "18 participants gained job-ready skills and 12 secured employment" is an outcome. Funders want outcomes. Most programs only track outputs because their data systems capture delivery, not change.

08

What are inputs in a logic model?

Inputs in a logic model are the resources a program invests to make its activities possible. Common inputs are funding (grants, contracts, philanthropic dollars), staff time (FTEs and roles), materials (curriculum, kits, supplies), facilities (classroom or clinic space), partner commitments (referral pipelines, employer agreements), and technology (software platforms and licenses). Inputs answer the question: what did we spend or commit to make this program run?

09

What are outputs in a logic model?

Outputs in a logic model are the countable products of program activities. Common outputs are participants enrolled, sessions delivered, hours of service provided, completion rates, and assessments administered. Outputs answer the question: how much of the program got delivered? They confirm that activities happened. They do not, on their own, prove that anything changed for participants. That is what outcomes are for.

10

How do you create a logic model?

Design backwards from impact. Start with the long-term change the program exists to create, then identify required outcomes, then design activities that produce those outcomes, then define outputs that prove activities happened, then list the inputs needed. Connect every component to a specific data field (a budget line, a milestone event, a count, a survey question, a follow-up) before program enrollment opens. The discipline of choosing what to track in each component is what separates a working logic model from a filed one.
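One way to make "every component connects to a data field" concrete is to express the logic model itself as a structured object. The sketch below is a hypothetical illustration, not Sopact's actual schema: all field names and values are invented for the workforce example, and the only point is that each outcome and output claim carries a named data field you can audit before enrollment opens.

```python
# Hypothetical sketch: a logic model written as a data schema, designed
# backwards from impact. Every outcome/output entry names the field that
# will capture its evidence. All names and numbers are illustrative.

logic_model = {
    "impact": {
        "claim": "Economic mobility through living-wage tech employment",
        "evidence": "6- and 12-month employment follow-up survey",
    },
    "outcomes": [
        {"change": "Confidence gain",            "field": "confidence_pre_post"},
        {"change": "Employed within six months", "field": "followup_employed"},
    ],
    "outputs": [
        {"count": "Participants enrolled", "field": "enrollment_count"},
        {"count": "Completion rate",       "field": "completion_flag"},
    ],
    "activities": ["12-week coding bootcamp", "weekly mentorship"],
    "inputs": ["$180K budget", "three staff", "curriculum licenses"],
}

def untracked_claims(model):
    """Return any outcome or output claim that lacks a tracked data field."""
    gaps = []
    for component in ("outcomes", "outputs"):
        for item in model[component]:
            if not item.get("field"):
                gaps.append((component, item))
    return gaps

# An empty list means every claim in the diagram maps to a data field.
print(untracked_claims(logic_model))
```

Running the audit before launch is the point: if `untracked_claims` returns anything, the diagram contains an arrow the program has no way to test.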

11

What is logic model software?

Logic model software typically means diagramming tools (Lucidchart, Visio, Miro, Canva) or methodology platforms (MissionMet, DoView). These tools help draw the diagram. They do not collect, link, or analyze the data that tests it. Sopact Sense is a different category of tool: a data collection and analysis platform where the logic model becomes the schema, with each component connected to a survey question, a tracked count, or a follow-up.

12

What is a logic model template?

A logic model template is a pre-structured document (in Word, PowerPoint, or PDF) with the five columns already laid out, ready to be filled in. Most templates are compliance artifacts. They produce a tidy diagram for a grant application and nothing more. A logic model template designed for measurement connects each column to a data field, so the template becomes a schema for what the program will track and analyze.

13

What are logic model assumptions?

Logic model assumptions are the conditions that have to be true for the program to work as drawn. Examples: employer partners will engage with graduates; participants will commit the required time; the curriculum will be culturally appropriate; the labor market will remain stable. Most programs list assumptions as a side note and never check them. Strong programs write down each assumption next to the relevant arrow and update the diagram when an assumption breaks.

14

What is the difference between a logic model and a theory of change?

A logic model describes a program: what goes in, what gets done, what gets produced, what changes. A theory of change explains why the program should work: the causal mechanisms, the assumptions, and the contextual conditions that have to hold. Logic models are compact and serve funder communication well. Theories of change are richer and serve internal program learning. Most strong programs use both, built from the same data so the diagram and the explanation stay aligned.

Working Session

Bring your logic model. Leave with a measurement system.

Walk in with the diagram you have, the survey you are running, and the report your funder wants. Walk out with a single connected design where the model, the data, and the report use the same participant ID and the same outcome questions. 60 minutes, no pitch deck.

  • Bring: your current logic model and one survey you are using.
  • Leave with: a draft data schema connected to each component of the diagram.
  • Format: 60 minutes, video call, working session, no slide deck.
01 Define: the logic model becomes the data schema, and every component connects to a tracked field.
02 Collect: one participant ID across intake, exit, and follow-up. Surveys link automatically.
03 Report: outcomes traceable to participants. Funder reports generated, not reconstructed.