Outcome reporting is the post-award process through which funders collect data from grantees about what their funded activities achieved. Done well, it generates insights that improve programme design, demonstrates funder impact, and builds accountability with grantees and stakeholders. Done poorly, it produces reports that nobody reads, burdens grantees with compliance overhead, and generates data that's too inconsistent to aggregate.
This guide covers how to design outcome reporting that actually works.
Most grant outcome reporting fails for a predictable set of reasons:
Reporting requirements designed by funders for funders. Report templates are usually built around what funders want to know, not what grantees can practically report. The result is forms that ask for data grantees don't collect, in formats they don't use.
No feedback loop. Funders collect outcome reports, file them, and never tell grantees what they found or how the data was used. From the grantee's perspective, the report goes into a black hole. This undermines motivation to report accurately.
Inconsistent data. Each grant programme has different outcome indicators, different scales, different definitions. The data can't be aggregated across programmes or compared over time, limiting its usefulness for learning.
Reporting burden. Complex, lengthy reporting forms with many required fields create significant burden for grantees — particularly smaller organisations with limited administrative capacity. High burden leads to poor quality responses, late submissions, and resentment.
What gets reported vs what happened. Grantees know what funders want to hear. Reports with no space for an honest account of what didn't work, what was harder than expected, and what they'd do differently next time generate sanitised success stories rather than genuine learning.
Start with what you'll use the data for. Before designing a report template, ask: if grantees give us good data on this, what will we do with it? If the answer is "file it," you don't need that data. Design reports around decisions you'll actually make.
Ask for what grantees can provide. The best outcome data is data that grantees already collect — participant numbers, session counts, outputs produced, client survey results. Asking grantees to collect data specifically for the funder's report that they wouldn't otherwise collect is expensive for them and unreliable for you.
Be proportionate. The reporting burden should match the grant size. A 15-question report with required evidence attachments for a $1,000 grant is inappropriate. A brief narrative plus participant numbers suits a small grant; a more structured report with quantitative data suits a large grant.
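One way to make proportionality concrete is to tier report templates by grant size. A minimal sketch in Python, where the tier thresholds and template names are illustrative assumptions rather than recommended values:

```python
# Illustrative tiers only: thresholds and template names are assumptions,
# not prescribed values. Adjust them to your programme's grant sizes.
REPORT_TIERS = [
    (5_000, "small_grant_report"),      # brief narrative + participant numbers
    (50_000, "standard_report"),        # structured quantitative + narrative
    (float("inf"), "detailed_report"),  # full outcome + financial reporting
]

def report_template_for(grant_amount: float) -> str:
    """Pick the lightest report template whose tier covers the grant amount."""
    for threshold, template in REPORT_TIERS:
        if grant_amount <= threshold:
            return template
    return REPORT_TIERS[-1][1]  # unreachable given the catch-all tier

assert report_template_for(1_000) == "small_grant_report"
assert report_template_for(25_000) == "standard_report"
```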
Make the connection explicit. Tell grantees why you're asking for specific information — what you'll use it for, how it connects to the programme's goals. Grantees who understand the purpose of data collection are more likely to collect it carefully.
Include space for honest reflection. A report template that only asks "what did you achieve?" will only get success narratives. Including questions like "what worked less well than expected?" and "what would you do differently?" creates space for genuine learning — and grantees who report honestly deserve to be treated as partners, not penalised for transparency.
Provide feedback. After reviewing reports, tell grantees what you found. Not a long analysis — even a brief note confirming what was most valuable about their report, or what you found interesting in the data, signals that someone read it and it mattered.
Outcome indicators are the specific, measurable signals that the programme is achieving its intended outcomes. Effective outcome indicators are:
Connected to the programme's theory of change. If the theory of change is that community events build social connection, the outcome indicator should measure social connection — not just event attendance.
Measurable with available data. Grantees should be able to provide the data from what they already know or collect. Indicators that require expensive new data collection are impractical.
Comparable across grantees. If you want to aggregate data across the programme portfolio, outcome indicators need to be defined clearly enough that different grantees measure them the same way (see the sketch after this list).
Not just outputs. Output indicators (number of events, number of participants) are easy to collect but don't tell you about change. Outcome indicators (percentage of participants who reported improved knowledge, satisfaction ratings, self-assessed wellbeing) are closer to change, though still proxies.
Defined with scale or range. "Improved" without a reference scale is ambiguous. "8 out of 10 participants reported improved confidence on a 5-point scale" is a specific claim.
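Pulling these properties together, here is a minimal sketch of what a shared indicator definition might look like; the OutcomeIndicator structure and its field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OutcomeIndicator:
    """A shared definition so every grantee measures the same thing the same way."""
    code: str         # stable identifier, reusable across programmes
    description: str  # what is being measured, in plain language
    unit: str         # e.g. "% of surveyed participants"
    scale: str        # the reference scale that makes "improved" unambiguous
    source: str       # where the data comes from, ideally data grantees already hold

confidence = OutcomeIndicator(
    code="IND-CONF-01",
    description="Participants reporting improved confidence",
    unit="% of surveyed participants",
    scale="self-rated on a 5-point scale; improvement = +1 point or more",
    source="grantee's existing post-session participant survey",
)
```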
Short is better. The best report templates are short — 4-6 key questions for mid-sized grants, a single page for small grants. Length doesn't correlate with quality.
Pre-populate what you know. Don't ask grantees to re-state the grant amount, the approved purpose, or information that's already in the grant record. Pre-populate the report form with this information so grantees only have to provide what's new.
Use structured fields for quantitative data. Participant numbers, event counts, and other quantitative data should be in structured fields — not embedded in narrative text — so they can be extracted and aggregated.
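As a sketch of this separation, with hypothetical field names: typed quantitative fields sit alongside free-text narrative fields, so the numbers can be aggregated without parsing prose:

```python
from dataclasses import dataclass

@dataclass
class OutcomeReport:
    # Structured quantitative fields: typed and directly aggregable.
    participants: int
    sessions_delivered: int
    pct_reporting_improvement: float  # measured against the agreed indicator scale
    # Narrative fields: context, stories, and reflection stay as free text.
    what_worked: str = ""
    what_worked_less_well: str = ""
    would_do_differently: str = ""

reports = [
    OutcomeReport(participants=42, sessions_delivered=6, pct_reporting_improvement=80.0),
    OutcomeReport(participants=18, sessions_delivered=4, pct_reporting_improvement=65.0),
]
total_participants = sum(r.participants for r in reports)  # 60, no text parsing needed
```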
Use narrative fields for qualitative data. Context, stories, explanations, and reflections are better in free-text narrative fields than in forced-choice formats.
Include a financial section. A simple budget vs actual summary — how much was received, how much spent by category, whether there's an unexpended balance — is important for financial accountability without requiring a full audit.
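The budget vs actual summary is simple arithmetic: one subtraction per category plus a total. A minimal sketch, with hypothetical spending categories:

```python
# Budget vs actual by category; the unexpended balance falls out of the totals.
budget = {"venue_hire": 2_000, "facilitation": 3_500, "materials": 500}
actual = {"venue_hire": 1_850, "facilitation": 3_500, "materials": 320}

for category in budget:
    variance = budget[category] - actual[category]
    print(f"{category}: budgeted {budget[category]}, spent {actual[category]}, variance {variance}")

unexpended = sum(budget.values()) - sum(actual.values())
print(f"Unexpended balance: {unexpended}")  # 330 in this example
```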
Make it submittable online. Paper reports are difficult to file, impossible to aggregate, and inconvenient for grantees. Online report forms are standard for any programme managing more than a handful of grants.
Collecting outcome data is only worthwhile if it's used. Uses include:
Programme learning. Aggregated outcome data — what outcomes are being achieved, at what scale, with what patterns across grantees — informs programme design decisions (see the aggregation sketch after this list).
Portfolio reporting. Board reports and annual reports that demonstrate programme impact draw on aggregated outcome data from grantee reports.
Grantee conversations. Individual grant outcome reports are the basis for conversations with grantees about what's working, what's not, and what might be done differently.
Funder accountability. Outcome data is the evidence base for funders' own accountability — to boards, to donors, to government, and to the communities they serve.
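To show what consistently defined indicators and structured fields buy you at portfolio level, here is a minimal aggregation sketch; the rows and indicator code are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows (grantee, indicator_code, value) extracted from the
# structured report fields sketched above.
rows = [
    ("Grantee A", "IND-CONF-01", 80.0),
    ("Grantee B", "IND-CONF-01", 65.0),
    ("Grantee C", "IND-CONF-01", 72.0),
]

by_indicator: dict[str, list[float]] = defaultdict(list)
for _grantee, code, value in rows:
    by_indicator[code].append(value)

for code, values in by_indicator.items():
    print(f"{code}: n={len(values)}, mean={mean(values):.1f}, "
          f"range={min(values)}-{max(values)}")
```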
Tahua provides configurable outcome reporting templates, structured data collection, and portfolio-level outcome aggregation that turns grant reports into programme intelligence.