Social Impact Measurement Frameworks: A Guide for Grantmakers

Social impact measurement is one of the most discussed and least resolved challenges in philanthropy. Every funder wants to know whether their grants are making a difference; few have clean, reliable ways of knowing. A proliferation of measurement frameworks — Theory of Change, Logic Models, Social Return on Investment, Balanced Scorecard, Most Significant Change, and many more — has produced more complexity than clarity. This guide explains the major frameworks, when each is appropriate, and how funders can apply measurement thinking practically without creating excessive burden for grantees.

Why measurement frameworks matter

Without a framework, impact measurement is ad hoc — funders collect whatever information grantees happen to have, draw idiosyncratic conclusions, and accumulate data that can't be compared across grants or over time. A framework provides:

  • A shared language between funder and grantee
  • Clarity about what will be measured before the grant starts
  • A structure for evaluating grantee applications (does their approach have a coherent theory of how it produces impact?)
  • A basis for portfolio-level analysis (what types of outcomes are grants across the portfolio producing?)

Frameworks are tools, not ends. The goal is genuine understanding of whether grants are producing community benefit — not compliance with a framework's terminology.

Theory of Change

What it is: A Theory of Change (ToC) is a visual and narrative explanation of how a programme's activities lead to outcomes, and ultimately to long-term impact. It maps the causal pathway: inputs → activities → outputs → outcomes → impact.

Key components:
- Inputs: Resources — money, staff, volunteers, equipment
- Activities: What the organisation does with those inputs
- Outputs: The direct products of activities (number of sessions run, people trained, meals delivered)
- Outcomes: Changes in the people or communities served (skills improved, incomes increased, wellbeing enhanced)
- Impact: Longer-term, broader changes in communities or systems
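Purely as an illustration, the five components and their ordered causal chain could be sketched as a simple data structure (the entries below are hypothetical examples, not a prescribed schema):

```python
# A minimal, illustrative sketch of a Theory of Change pathway.
# All entries are hypothetical; real ToCs are narrative documents,
# not data structures — this just shows the ordered chain.
theory_of_change = {
    "inputs":     ["grant funding", "2 staff", "volunteer tutors"],
    "activities": ["weekly literacy sessions for adults"],
    "outputs":    ["120 sessions run", "45 adults trained"],
    "outcomes":   ["reading skills improved", "confidence increased"],
    "impact":     ["greater community participation and employment"],
}

# The arrow notation inputs → activities → outputs → outcomes → impact
# is just the ordered reading of these keys:
pathway = " → ".join(theory_of_change)
print(pathway)
```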

When it works well: Theory of Change is most useful for complex interventions where the causal mechanism isn't obvious, or where multiple pathways might lead to the intended outcome. It's particularly valuable at the programme design stage — developing a ToC forces clarity about what you're trying to achieve and how.

Limitations: Theories of Change can be created retrospectively to justify existing practice rather than to guide it. They can be overly linear in domains where change is complex and non-linear. And they require substantial assumptions about causality that are often not empirically validated.

For grantmakers: Requiring applicants to articulate a theory of change is a reasonable assessment requirement for larger or more complex grants. It tests whether organisations have thought through how their activities lead to outcomes. But be proportionate — requiring a detailed ToC from a community group running a small sports programme is excessive.

Logic Models

What it is: A Logic Model is similar to Theory of Change but typically more structured and visual — a table or diagram showing inputs, activities, outputs, short-term outcomes, and long-term outcomes in parallel columns.

Difference from ToC: Logic Models tend to be more systematic and grid-like; Theories of Change tend to be more narrative and causal. Logic Models are better for describing a programme clearly; ToCs are better for explaining why the programme will work.

When it works well: Logic models are excellent for programme planning and for communicating a complex programme clearly to external audiences (funders, boards, communities). They're also useful for structuring monitoring and evaluation plans — each column in the model suggests something to measure.

Limitations: Logic models can imply more certainty than exists about how activities lead to outcomes. They work better for linear, well-understood programmes than for complex adaptive systems.

For grantmakers: Logic models are a reasonable thing to request from applicants who describe complex, multi-component programmes. A simple one-page logic model demonstrates programme clarity. Avoid requiring elaborate models from small, community-based organisations.

Social Return on Investment (SROI)

What it is: SROI is a framework for calculating the financial value of social outcomes — expressing impact as a ratio (e.g., $4 of social value for every $1 of investment). It attempts to put monetary values on outcomes that aren't typically traded in markets (improved mental health, reduced loneliness, environmental restoration).

How it works:
1. Identify outcomes produced by the programme
2. For each outcome, assign a financial proxy value (the price of equivalent market alternatives)
3. Calculate the total value of outcomes produced
4. Divide by the investment to get the SROI ratio
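The four steps above reduce to simple arithmetic once proxies are chosen. A minimal sketch, with entirely hypothetical figures (note that a rigorous SROI would also adjust outcomes for deadweight, displacement, attribution, and drop-off before valuing them):

```python
# Illustrative SROI calculation following the four steps above.
# All names, headcounts, and proxy values are hypothetical.

# Steps 1-2: outcomes with assumed financial proxy values (per person)
outcomes = [
    {"name": "improved mental wellbeing", "people": 40, "proxy_value": 1200.0},
    {"name": "increased employment income", "people": 15, "proxy_value": 3500.0},
]

# Step 3: total value of outcomes produced
total_value = sum(o["people"] * o["proxy_value"] for o in outcomes)

# Step 4: divide by the investment to get the SROI ratio
investment = 30000.0
sroi_ratio = total_value / investment

print(f"Total social value: ${total_value:,.0f}")
print(f"SROI ratio: {sroi_ratio:.2f}:1")
```

Even in this toy example, doubling one proxy value materially shifts the ratio — which is exactly the sensitivity the limitations below describe.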

When it works well: SROI can be compelling for advocacy — demonstrating to governments or corporate donors that community investment produces economic returns. It's most credible when the financial proxies are robust and the outcomes are well-evidenced.

Limitations: SROI ratios are extremely sensitive to the choice of proxy values, which are inherently contestable. The appearance of precision (a ratio of 3.7:1) can be misleading when the underlying assumptions are uncertain. SROI has been criticised for reducing complex social value to financial terms and for being easy to game.

For grantmakers: SROI is rarely a sensible requirement for grantees unless the funder specifically needs economic return arguments for advocacy purposes. It's expensive to conduct rigorously and produces results that are hard to compare across organisations.

Most Significant Change (MSC)

What it is: Most Significant Change is a participatory approach to impact monitoring where community members and stakeholders describe stories of significant change they've observed or experienced. These stories are collected, reviewed, and selected by programme staff, management, and eventually funders — creating a structured conversation about what kinds of change matter most.

How it works:
1. Programme participants answer: "In the past period, what was the most significant change for you or your community?"
2. Stories are collected and reviewed at each level of the organisation
3. Selected stories move up, with discussion about why they were chosen
4. Funders review the selected stories

When it works well: MSC is particularly valuable for complex programmes where outcomes are unexpected, multi-dimensional, or don't fit predetermined indicator frameworks. It centres the perspectives of the people who matter most — the communities served. It's also useful for values alignment — the stories that get selected reveal what the programme and funder consider important.

Limitations: MSC is time-intensive and produces qualitative evidence that is harder to aggregate and compare than quantitative data. It requires genuine commitment from both funders and grantees to the process.

For grantmakers: MSC can work well alongside standard quantitative reporting rather than replacing it. It's particularly appropriate for programmes in complex domains (mental health, cultural identity, community cohesion) where standard outcome metrics don't capture what's happening.

Developmental Evaluation

What it is: Developmental Evaluation (DE) is a methodology designed for complex, emergent programmes where the right approach is still being discovered. Rather than evaluating against predetermined criteria, a developmental evaluator embeds within the programme, helps teams understand what's working, and supports ongoing adaptation.

Key features:
- The evaluator works inside the programme, not as an external judge
- Evaluation is ongoing, not event-based
- The focus is on learning and adaptation, not accountability
- Complexity is embraced rather than simplified away

When it works well: DE is most useful for new programmes in complex domains, for scale-up of proven models in new contexts, or when the funder is genuinely committed to learning rather than just measuring.

Limitations: DE requires a long-term embedded evaluator, which is expensive. It produces learning, not proof — which may not satisfy accountability requirements. It requires genuine psychological safety for programme teams to be honest about what isn't working.

For grantmakers: DE is appropriate for significant multi-year investments in innovation or systemic change work. It's not appropriate for routine grant assessment.

Choosing the right framework

No single framework is appropriate for all situations. Useful questions:

What do you need to know? If you need to understand whether a programme is delivering as designed, a Logic Model and associated indicators work well. If you need to understand why a programme is or isn't working, Theory of Change plus qualitative research is better. If you're still discovering what the right programme looks like, Developmental Evaluation is appropriate.

What is the grantee capacity? Heavy measurement requirements are appropriate only where grantees have the capacity to meet them. Community groups shouldn't be required to conduct Social Return on Investment calculations.

What is the grant size and risk? Larger grants warrant more rigorous measurement. A $5,000 community grant needs a brief outcome description, not a Theory of Change process.

Are you measuring for learning or accountability? Learning-oriented measurement is more developmental, qualitative, and focused on what can be improved. Accountability-oriented measurement is more standardised, quantitative, and focused on whether commitments were met. Both have legitimate roles; knowing which you're doing shapes your approach.

Practical measurement for funders

For most community grantmakers, the practical approach involves:

  1. Requiring applicants to articulate outcomes — what will change for people and communities as a result of this grant?

  2. Asking how those outcomes will be measured — not requiring a specific framework, but expecting some thought about evidence

  3. Requiring quantitative reporting on outputs (reach, activities) and qualitative reporting on outcomes

  4. Building a portfolio picture — aggregating outcome data across grants to understand sector-level patterns

  5. Using evaluation for selected programmes — commissioning rigorous evaluation for significant programmes or pilots, rather than requiring it from all grantees
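Step 4 above (building a portfolio picture) is at heart an aggregation exercise. A minimal sketch, assuming grantees report outcomes against a shared set of outcome categories (the category names and grant IDs below are illustrative only):

```python
from collections import Counter

# Hypothetical grantee reports, each tagged with outcome categories
# from a shared framework (all names are illustrative).
grant_reports = [
    {"grant": "G-001", "outcomes": ["wellbeing", "skills"]},
    {"grant": "G-002", "outcomes": ["skills", "employment"]},
    {"grant": "G-003", "outcomes": ["wellbeing"]},
]

# Aggregate: how many grants report each outcome type across the portfolio
portfolio_counts = Counter(
    outcome for report in grant_reports for outcome in report["outcomes"]
)

for outcome, count in portfolio_counts.most_common():
    print(f"{outcome}: {count} grants")
```

The design point is the shared category set: without it, grantee reports can't be compared, which is the ad hoc problem described at the start of this guide.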


Tahua's grants management platform supports outcome tracking across grant portfolios — with configurable outcome frameworks, grantee reporting, and the analytics that help funders understand whether their investment is producing community benefit.

Book a conversation with the Tahua team →