Demonstrating social impact is increasingly central to the philanthropic relationship — funders want to know their investment is achieving change, and grant recipients want to understand and communicate their contribution to community wellbeing. But impact measurement is contested, complex, and resource-intensive. Understanding the landscape of approaches — and what funders actually find useful — helps organisations invest wisely in measurement.
Impact measurement starts with theory of change — a clear articulation of:
- The problem being addressed
- The intervention or activities
- The outputs (immediate products of activity)
- The outcomes (changes in behaviour, capacity, or condition)
- The impact (longer-term, wider changes attributable to the programme)
- The assumptions underlying the causal chain
Without a clear theory of change, it's impossible to measure impact coherently. Different measurement approaches are suited to different points in the theory of change.
Outputs measurement
Counting what you do:
- Number of sessions delivered
- Number of participants
- Number of meals provided
- Hours of service
Outputs are easy to measure but don't demonstrate change. Funders who only require output reporting are getting an incomplete picture.
Outcomes measurement
Measuring change in participants or communities:
- Change in knowledge (pre-post survey)
- Change in behaviour (self-report, observation)
- Change in condition (standardised assessment tools)
- Change in status (employment, housing, health)
Outcomes measurement is more meaningful but requires validated tools and consistent administration.
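The pre-post pattern above can be sketched in a few lines. This is a minimal illustration, assuming a simple score-per-participant survey; participant IDs and scores are hypothetical, and real analyses should handle attrition and tool-specific scoring rules.

```python
# Sketch: computing pre-post change for an outcome survey.
# Participant IDs and scores below are illustrative, not real data.

def prepost_changes(pre: dict, post: dict) -> dict:
    """Change score (post minus pre) for participants measured at both points."""
    matched = pre.keys() & post.keys()
    return {pid: post[pid] - pre[pid] for pid in matched}

def mean_change(changes: dict) -> float:
    """Average improvement across matched participants."""
    return sum(changes.values()) / len(changes)

# Hypothetical wellbeing scores before and after a programme
pre  = {"p1": 10, "p2": 14, "p3": 8,  "p4": 12}
post = {"p1": 15, "p2": 16, "p3": 13, "p5": 18}  # p4 lost to follow-up

changes = prepost_changes(pre, post)  # only p1-p3 are matched
print(mean_change(changes))           # prints 4.0
```

Only participants measured at both points count towards the change score, which is why consistent administration matters: dropouts quietly shrink the matched sample.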
Validated assessment tools
Many domains have validated, standardised measurement instruments:
- Wellbeing: WHO-5, PHQ-9 (depression), GAD-7 (anxiety), PWB
- Child development: Ages and Stages Questionnaire (ASQ), SDQ
- Social connection: UCLA Loneliness Scale, Social Capital measures
- Employment: employment status, hours, wage level
- Academic: NAPLAN equivalents, reading assessment tools
Using validated tools enables benchmarking against norms and comparison across programmes.
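As one concrete example, the WHO-5 has a simple published scoring rule: five items each rated 0 (at no time) to 5 (all of the time), summed to a raw score of 0-25 and multiplied by 4 to give a 0-100 percentage. A sketch of that scoring (item responses below are illustrative):

```python
# Sketch: scoring the WHO-5 wellbeing index.
# Five items, each rated 0-5; raw score (0-25) is multiplied by 4
# to give a 0-100 percentage, per the standard WHO-5 scoring rule.

def who5_percentage(items: list[int]) -> int:
    if len(items) != 5 or any(not 0 <= i <= 5 for i in items):
        raise ValueError("WHO-5 expects five item scores in the range 0-5")
    return sum(items) * 4

print(who5_percentage([3, 4, 2, 3, 4]))  # raw 16 -> prints 64
```

Because the scoring rule is fixed and published, results can be compared against population norms and across programmes, which is exactly what ad hoc surveys cannot offer.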
Social Return on Investment (SROI)
SROI converts social outcomes to monetary values — assigning financial proxies to non-financial outcomes to calculate a return:
- SROI ratio: for every $1 invested, $X of social value is created
- Requires: identifying outcomes, finding financial proxies, applying attribution and deadweight adjustments
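The calculation itself is straightforward once proxies are chosen. A simplified sketch, where the financial proxies, deadweight, and attribution figures are illustrative assumptions rather than recommended values:

```python
# Sketch: a simplified SROI ratio calculation.
# Proxy values, deadweight, and attribution below are hypothetical.

def sroi_ratio(outcomes: list[dict], investment: float) -> float:
    """Social value per dollar invested, after standard SROI adjustments.

    deadweight:  fraction of the outcome that would have happened anyway
    attribution: fraction of the remaining change credited to others
    """
    total_value = 0.0
    for o in outcomes:
        gross = o["quantity"] * o["proxy_value"]
        net = gross * (1 - o["deadweight"]) * (1 - o["attribution"])
        total_value += net
    return total_value / investment

outcomes = [
    # 40 people into work, proxied at $5,000 each
    {"quantity": 40, "proxy_value": 5000, "deadweight": 0.25, "attribution": 0.20},
    # 120 people with improved wellbeing, proxied at $800 each
    {"quantity": 120, "proxy_value": 800, "deadweight": 0.10, "attribution": 0.30},
]

ratio = sroi_ratio(outcomes, investment=100_000)
print(f"${ratio:.2f} of social value per $1 invested")  # prints $1.80 ...
```

Note how sensitive the ratio is to the proxy and adjustment figures: changing one assumption shifts the headline number, which is precisely the criticism levelled at SROI below.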
Criticisms of SROI:
- Financial proxies are often arbitrary or contested
- Can create false precision in inherently uncertain measurements
- Time and cost intensive
- Not well-suited for comparison across programmes (depends on proxy selection)
SROI is most useful for internal decision-making and stakeholder communication rather than inter-programme comparison.
Contribution analysis
Rather than claiming attribution (this programme caused this outcome), contribution analysis claims a plausible contribution:
- What was the programme's contribution to observed change?
- What else might have contributed?
- How confident can we be that the programme mattered?
This is more honest than strict attribution while still making a case for programme contribution.
Most Significant Change (MSC)
A qualitative approach that collects stories of change and uses participatory processes to identify the most significant:
- Field workers collect stories of change from participants
- Stories are shared up through the organisation
- The most significant stories are selected through structured group discussion at each level
MSC captures qualitative change that quantitative tools miss.
Randomised Controlled Trials (RCTs)
The gold standard for causal evidence — comparing outcomes for programme participants vs a control group:
- Provides the strongest evidence of causation
- Expensive and complex to conduct
- Not feasible for most community programmes
- Ethical constraints on denying access to beneficial programmes
Only a small fraction of social programmes have RCT evidence — most rely on weaker study designs.
Not all funders want comprehensive impact measurement
Gaming trusts typically want basic output reports — number of people served, how money was spent.
Community foundations want more narrative — what changed, what did people experience?
Large foundations increasingly want outcomes data with validated tools and theory-of-change alignment.
Government contracts often require specific output and outcome metrics defined in the contract.
Match your measurement investment to your funder's actual requirements — over-investing in complex measurement for a funder who only needs a one-page report wastes resources.
Small organisations often feel overwhelmed by impact measurement demands. Practical approaches:
- One or two core outcome measures (not comprehensive)
- Simple survey tools (e.g. Google Forms)
- Validated tools where available (free to use)
- Consistent administration (measure the same way each time)
- Story capture alongside numbers (qualitative complement to quantitative)
Good measurement is consistent, not comprehensive. Measuring one thing well is more valuable than measuring twenty things poorly.
Measuring only the positive
Strong impact measurement acknowledges what didn't work as well as what did. Funders increasingly value learning organisations over those that only report success.
Claiming attribution without evidence
Saying "our programme reduced unemployment in our community" without controlling for other factors is overclaiming. Use language of contribution and plausibility.
Survey fatigue
Participants who are surveyed too frequently stop responding honestly. Minimise survey burden and explain why data is collected.
Measuring for measurement's sake
Measurement that doesn't feed learning or decision-making is wasted effort. Ask: what will we do with this data?
Tahua's grants management platform supports funders and grant recipients in impact measurement — with customisable outcome tracking, survey integration, validated assessment tool support, portfolio-level impact dashboards, and the reporting tools that help grantmakers and grantees demonstrate and learn from their social impact.