An outcomes framework is a structured way of describing what your grants programme is trying to change — and how you'll know if it's working. Done well, it gives your team, your grantees, and your governance a shared language for success. Done badly, it becomes a document no one reads and a reporting obligation no one can meet.
Most frameworks fail because they're designed to satisfy governance requirements rather than to guide real decisions. This guide takes the opposite approach: start with the decisions you need to make, and design the framework to inform them.
An outcomes framework describes the chain from your grant activity to the change you're trying to create. It typically includes five levels:

- Inputs: the resources that go in, such as funding and staff time.
- Activities: what the programme and its grantees do with those resources.
- Outputs: the direct, countable products of activity, such as workshops delivered or people trained.
- Outcomes: the changes that result for the people or organisations involved, such as new knowledge, changed behaviour, or improved conditions.
- Impact: the long-term change in the world that the programme contributes to.
Not every framework needs all five levels. For most grant programmes, outputs and outcomes are the useful part — inputs and activities are already described in applications, and impact is too long-term and multi-causal to attribute reliably to any single programme.
The most common failure in outcomes framework design is defining outcomes too broadly. "Stronger communities" or "improved wellbeing" aren't outcomes — they're aspirations. An outcomes framework needs to describe specific changes in specific populations.
Work through these questions:
- Who are you trying to affect? Be specific. "Community organisations in the region" is too broad. "Small community organisations (under $500k turnover) in the region that are in their first five years of operation" is a population you can measure.
- What do you want to change for them? Think about knowledge, skills, behaviour, relationships, or conditions. "Improved governance capability" becomes "governance volunteers who can confidently fulfil their legal obligations", which becomes "governance volunteers who report greater confidence in managing financial risk and meeting reporting requirements."
- Over what timeframe? Some changes happen within the funded period; others take years. Be realistic about what's measurable, and when.
A theory of change is a narrative that explains why your grant activity should lead to the outcomes you're describing. It makes your assumptions visible and testable.
For a capacity-building grants programme, the theory of change might be:
Small community organisations often struggle with governance because their volunteer trustees lack training and support. When trustees receive targeted training and peer support, they gain the knowledge and confidence to fulfil their roles more effectively. Organisations with stronger governance are more sustainable, better able to manage risks, and better positioned to deliver impact for their communities.
This narrative identifies the mechanism — training and peer support — and the assumption — that knowledge gaps (rather than, say, lack of time or burnout) are the primary constraint on governance quality. If that assumption is wrong, the programme won't achieve its outcomes regardless of how well it's run.
Writing down your theory of change surfaces assumptions that might otherwise go unexamined. Test them with people who know the field before you finalise your framework.
Indicators are the specific, measurable signals that tell you whether your outcomes are being achieved.
For each outcome, define:

- the indicator: the specific measure you'll use
- the method: how you'll collect the data, such as a survey or administrative records
- the timing: when you'll collect it
An example for the governance programme:
| Outcome | Indicator | Method | Timing |
|---|---|---|---|
| Trustees gain confidence in governance | % of training participants who report increased confidence | Post-training survey | At programme close |
| Organisations improve governance practice | % of organisations that have adopted at least two new governance practices | Follow-up survey | 6 months post-grant |
| Organisations become more sustainable | Organisation survival rate | Admin data | 24 months post-grant |
Keep your indicator set manageable. Four to six indicators per programme is usually enough. More than that and you're generating data no one has time to analyse.
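To make the indicator table concrete, here is a minimal sketch of how the first indicator (% of training participants reporting increased confidence) might be computed from survey responses. The record shape, field names, and the 1-to-5 confidence scale are illustrative assumptions, not features of any particular grants system.

```python
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    """One participant's post-training survey response (hypothetical shape)."""
    org_id: str
    confidence_before: int  # self-rated confidence, 1-5, before training
    confidence_after: int   # self-rated confidence, 1-5, after training

def pct_increased_confidence(responses: list[SurveyResponse]) -> float:
    """Indicator: % of participants whose self-rated confidence increased."""
    if not responses:
        return 0.0
    increased = sum(
        1 for r in responses if r.confidence_after > r.confidence_before
    )
    return 100.0 * increased / len(responses)

# Made-up sample data for illustration only.
sample = [
    SurveyResponse("org-1", 2, 4),
    SurveyResponse("org-2", 3, 3),
    SurveyResponse("org-3", 1, 3),
    SurveyResponse("org-4", 4, 3),
]
print(f"{pct_increased_confidence(sample):.0f}% reported increased confidence")
```

The point of the sketch is that each indicator reduces to one simple, repeatable calculation; if you can't write the calculation down this plainly, the indicator probably isn't well defined yet.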
Before you finalise your framework, answer this question: what decisions will this data inform?
If the answer is "board reporting," that's a governance function and the data needs to be summarised at a portfolio level.
If the answer is "improving programme design," you need to be able to disaggregate the data — to see which types of grantees are achieving outcomes and which aren't, so you can adjust.
If the answer is "making the case for continued funding," you need compelling evidence of impact that a non-specialist can understand.
Design your measurement approach for the decision, not for the framework. This prevents you from collecting data you never use.
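When the decision is programme improvement, disaggregation is the key operation: you need outcome rates broken down by grantee type, not just a portfolio average. A rough sketch, with hypothetical grantee categories and made-up records:

```python
from collections import defaultdict

# Hypothetical records: (grantee_type, achieved_outcome) pairs.
# Categories and data are illustrative only.
grants = [
    ("first-time", True), ("first-time", False), ("first-time", True),
    ("repeat", True), ("repeat", True), ("repeat", True),
    ("first-time", False),
]

def outcome_rate_by_type(records):
    """Return the % of grantees achieving the outcome, per grantee type."""
    totals = defaultdict(lambda: [0, 0])  # type -> [achieved, total]
    for gtype, achieved in records:
        totals[gtype][1] += 1
        if achieved:
            totals[gtype][0] += 1
    return {t: 100.0 * a / n for t, (a, n) in totals.items()}

for gtype, rate in sorted(outcome_rate_by_type(grants).items()):
    print(f"{gtype}: {rate:.0f}% achieving outcomes")
```

A gap between the groups in output like this is exactly the signal a portfolio average hides, and it is what lets you adjust eligibility, support, or targeting in the next round.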
An outcomes framework only works if it's embedded in how you run the programme:
- In guidelines: Tell applicants what outcomes the programme is trying to achieve. Applications that clearly connect their proposed activity to those outcomes should score well; those that don't, shouldn't.
- In assessment: Include a criterion for outcomes alignment in your scoring rubric. Assess how well the applicant's theory of change matches yours.
- In grant conditions: Specify the outcomes indicators the grantee is expected to contribute to and how they'll report on them.
- In reporting: Design your reporting template around your indicators. Ask for the data you need; don't ask for data you won't use.
- In review: After each round, review your outcomes data. What are you learning? What's working and what isn't? Update the framework accordingly.
A framework that lives in a drawer isn't a framework — it's a document. The test is whether it changes what you do.
This article is part of the complete guide: What Great Grant Outcome Reporting Looks Like.