Ask a room of grant managers whether their grantees submit outcome reports, and most hands go up. Ask how many of those reports are genuinely useful — how many have actually changed how the programme operates — and the room gets quiet.
Outcome reporting is one of the most talked-about and least-solved problems in the grants sector. Grantees submit documents that meet the letter of the requirement. Funders receive them, file them, and move on. Nobody learns much. The next round starts. The cycle repeats.
This isn't cynicism. It's an accurate description of how most outcome reporting works in practice, and it happens because outcome reporting is designed as a compliance mechanism rather than a learning mechanism. Changing that requires rethinking what you're asking for, and why.
Not all evidence is equal, and not all programmes need the same depth of evidence. The Outcome Evidence Hierarchy gives you a framework for thinking about what you're actually measuring, and what that measurement is worth.
Outputs are what was produced. Events held, participants reached, documents created, services delivered. Outputs are easy to count, easy to verify, and almost entirely useless as evidence that anything changed.
"We ran 12 workshops with 340 participants" tells you the activity happened. It tells you nothing about whether the activity made any difference. Yet output counting is the dominant mode of reporting in the grants sector. Most reporting templates are essentially output logs dressed up as outcome evidence.
Outputs matter — they're the mechanism through which change happens — but they are not outcomes. A funder who reports to their board that "our grants supported 12,000 participants this year" is reporting on activity, not impact.
Outcomes are what changed as a result of the activity. Skills gained, behaviours changed, relationships formed, services accessed, knowledge improved. Outcomes are harder to measure than outputs, but they're what you actually funded the project to produce.
The key question for outcome evidence is: compared to what? A report that says "participants reported improved confidence" is better than a participant count, but it's still weak evidence if there's no baseline. Improved compared to when? Compared to whom?
Good outcome evidence doesn't require a randomised control trial. It requires:
- A clear statement of what was expected to change, specified before the project started
- A measurement approach (survey, assessment, observation) applied at the start and end of the project
- An honest account of what changed, what didn't, and what you can't attribute to the grant
Impact is the sustained, population-level change that multiple interventions contribute to over time. Individual grants rarely produce impact directly — they contribute to it, alongside other interventions, over longer timeframes than a typical grant cycle.
A funder who claims that a $50,000 grant "reduced youth unemployment" is almost certainly overstating the evidence. Reducing youth unemployment is a fine thing to aspire to, but it's not something a two-year project report can credibly claim.
Knowing where your evidence sits on this hierarchy — and being honest with your board and your own funders about it — is the foundation of credible outcome reporting.
The most common reason outcome reports are useless is that nobody defined what success looked like before the grant was approved. If the grant contract says "support young people in the community," you can't meaningfully evaluate whether that happened. You haven't defined which young people, what kind of support, or what "worked" would look like.
Outcome reporting starts at grant design. Before approving a grant, agree on:
The theory of change. If we fund this activity, what do we expect to happen, for whom, and through what mechanism? This doesn't need to be a formal logic model — it can be two sentences. "We're funding these workshops because we believe participants who complete them will be better equipped to manage their household finances. We'll measure this by comparing pre/post scores on a financial literacy assessment."
The measurement approach. How will the grantee know if the outcome happened? Pre/post surveys, case file review, administrative data, participant interviews? Agree this upfront. If you leave it to the grantee to decide what evidence to collect, you'll get inconsistent, incomparable reports.
The baseline. What's the starting point? You can't measure change without knowing where things stood at the beginning. For some programmes, baseline data exists (administrative records, prior surveys). For others, the grantee needs to collect it as part of project startup. Either way, it needs to be captured before the intervention, not after.
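To make the pre/post idea concrete, here is a minimal sketch of what per-participant measurement data could look like, reusing the hypothetical financial literacy example from the theory of change above. The field names and scores are purely illustrative, not a prescribed schema.

```python
# Minimal sketch of a per-grant pre/post measurement record.
# Field names and scores are illustrative assumptions, not a required format.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ParticipantMeasurement:
    participant_id: str
    baseline_score: float   # captured before the intervention starts
    endpoint_score: float   # captured at project close, using the same instrument

measurements = [
    ParticipantMeasurement("p01", 4.5, 7.0),
    ParticipantMeasurement("p02", 5.8, 8.1),
    ParticipantMeasurement("p03", 5.1, 7.9),
]

baseline_avg = mean(m.baseline_score for m in measurements)
endpoint_avg = mean(m.endpoint_score for m in measurements)
print(f"Baseline average: {baseline_avg:.1f}")
print(f"End-point average: {endpoint_avg:.1f}")
print(f"Average change: {endpoint_avg - baseline_avg:+.1f}")
```

The point is not the tooling: it's that the baseline exists as data captured before the intervention, so the end-of-grant report can show change rather than assert it.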
A well-designed final report template for a grants programme asks the following:
About the outcome:
- What outcome did you expect to see? (Reference back to the agreed theory of change)
- What did you actually observe?
- What evidence do you have? (Attach the data, survey results, or case records)
- How confident are you that the grant contributed to this outcome, vs other factors?
About what didn't work:
- What didn't go as planned?
- What would you do differently if you ran this project again?
- What did you learn that you didn't know at the start?
About sustainability:
- Will this outcome persist after the grant period? Why or why not?
- What would need to happen for this to scale or continue?
Notice that "how many participants did you reach" is not on this list. Participant counts belong in progress reports, not outcome reports. By the time you're asking about outcomes, you want to know what changed — not what happened.
Grant managers worry a lot about attribution — the question of whether the change observed was caused by the grant, or would have happened anyway, or happened because of other factors.
This worry is often used as a reason not to ask about outcomes at all, on the grounds that you can't prove causation without a control group. That's the wrong conclusion to draw.
You don't need to prove causation to make useful funding decisions. You need reasonable evidence of contribution. If participants in a leadership programme show substantial improvement in a validated assessment of leadership capability, and the programme was the primary activity they undertook during that period, you have reasonable grounds to attribute the improvement to the programme. Not proof, but reasonable grounds.
The honest formulation is: "Based on [evidence], we believe this grant contributed to [outcome]. We can't rule out other contributing factors, which include [X and Y]." That's credible and useful. "This grant reduced recidivism" — without evidence or qualification — is neither.
The biggest gap in most grants programmes isn't the collection of outcome data — it's what happens to it after it arrives.
Individual grantee reports are useful for accountability (did this grant achieve what it set out to do?) but not for learning (are grants in this category generally working?). For programme-level learning, you need to aggregate across grants.
This means your reporting template needs to be designed for aggregation. Free-text narrative is not aggregable. Structured data is.
For example: instead of asking grantees to "describe the outcomes achieved," ask them to complete a structured table:
| Outcome | Measurement method | Baseline | End-point | Change |
|---|---|---|---|---|
| Financial literacy score | Pre/post survey (10-point scale) | 5.2 average | 7.8 average | +2.6 |
With this structure across 40 grantees, you can calculate average change across the portfolio, identify which approaches are producing the strongest outcomes, and present credible aggregate evidence to your board.
Without this structure, you have 40 documents describing 40 different outcomes in 40 different ways, and the most you can say is "our grantees reported positive outcomes."
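As an illustration of that payoff: once final reports arrive as structured rows like the table above, portfolio-level aggregation becomes a few lines of analysis rather than a manual reading exercise. The sketch below uses hypothetical field names and figures; it shows the principle, not a prescribed schema.

```python
# Minimal sketch of portfolio-level aggregation over structured outcome rows.
# Grantee names, approaches, and scores are illustrative assumptions only.
from collections import defaultdict
from statistics import mean

# One row per grantee final report: the approach used and the baseline and
# end-point scores reported against the agreed measurement method.
reports = [
    {"grantee": "A", "approach": "workshops", "baseline": 5.2, "endpoint": 7.8},
    {"grantee": "B", "approach": "mentoring", "baseline": 4.9, "endpoint": 6.1},
    {"grantee": "C", "approach": "workshops", "baseline": 5.5, "endpoint": 7.2},
    {"grantee": "D", "approach": "mentoring", "baseline": 5.0, "endpoint": 5.4},
]

# Average change across the whole portfolio.
changes = [r["endpoint"] - r["baseline"] for r in reports]
print(f"Average change across portfolio: {mean(changes):+.1f}")

# Average change by approach, to see which approaches are producing
# the strongest outcomes.
by_approach = defaultdict(list)
for r in reports:
    by_approach[r["approach"]].append(r["endpoint"] - r["baseline"])
for approach, vals in sorted(by_approach.items()):
    print(f"{approach}: {mean(vals):+.1f} average change across {len(vals)} grants")
```

None of this is possible with free-text narrative: the structure in the reporting template is what makes the analysis trivial.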
Most grant-making organisations are themselves funded — by government, by endowments, by charitable donors, by other foundations. Those funders have their own outcome expectations, and they're increasingly asking for evidence rather than activity counts.
The shift toward evidence-based reporting at the funder level is filtering down to grant managers. Funders who can demonstrate outcome evidence at the portfolio level — not just individual grant stories — are better positioned for their own funding conversations. Building the infrastructure to aggregate outcome evidence across your portfolio is an investment in your own institutional credibility, not just in your grantees' accountability.
You don't have to redesign your entire reporting framework at once. Start with one grant round, one category, or one group of grantees. Define the outcomes, agree the measurement approach, and ask for structured evidence in the final report.
Evaluate what comes back. Is it what you hoped for? Where did grantees struggle? What questions produced useful answers and which produced noise?
Use that learning to refine the template before you apply it more broadly. Outcome reporting at scale is hard. Outcome reporting in a pilot cohort, where you're iterating based on real feedback, is tractable.
The sector's reporting problem isn't that grantees don't want to share what they learned. It's that funders haven't asked the right questions clearly enough, or given grantees the tools to answer them. That's a design problem. And design problems are solvable.
Outputs are what was done — workshops run, people reached, resources produced. Outcomes are what changed as a result — skills gained, behaviours shifted, problems solved. Most grant reports focus on outputs because they're easier to count, but outcomes are what funders are actually trying to achieve.
A proportionate approach works well: a short participant survey of three to five questions, a brief grantee narrative about what changed, and one or two proxy indicators that signal the outcome is occurring. Research-grade evaluation isn't necessary — reasonable evidence that something changed is sufficient.
Useful reporting answers three questions: what changed, for whom, and what does that tell us about whether the programme is working? It's designed around decisions the funder needs to make — not around demonstrating activity. When reporting is designed for learning rather than accountability, both funder and grantee get more value from the process.