Evaluating Grant Programmes: A Practical Framework for Funders

Most funders invest significant resources in assessing applications but relatively little in evaluating whether their funded programmes are actually working. Programme evaluation — assessing whether a grant programme is achieving its intended outcomes — is one of the highest-value learning investments a funder can make.

Why grant programme evaluation matters

Better investment decisions. Without evaluation, funders make subsequent investment decisions based on intuition and anecdote. With evaluation evidence, funders can make better decisions about where to invest, what approaches work, and what to change.

Accountability to beneficiaries. Funders hold charitable assets on behalf of the communities they serve. Accountability for how those assets are invested runs to beneficiaries, who deserve evidence that the funder's decisions produced the intended outcomes.

Programme improvement. The primary value of evaluation for active programmes is improvement — identifying what's working, what isn't, why, and what changes would produce better outcomes. Evaluation that only produces a pass/fail verdict is less valuable than evaluation that generates useful learning.

Sector learning. The philanthropic sector suffers from insufficient knowledge-sharing about what works. Funders who publish their evaluation findings — including what didn't work — contribute to sector learning that benefits the whole field.

Types of programme evaluation

Developmental evaluation. Ongoing learning alongside an emerging programme — helping programme staff understand what's happening, why, and what to adjust. Best suited to new or experimental programmes where the right approach is still being discovered.

Formative evaluation. Assessment during a programme's operation to identify improvements that can be made while the programme is still running. Formative evaluation findings are used to adjust programme design.

Summative evaluation. Assessment of a programme's outcomes after it has run long enough to produce results. Summative evaluation answers whether the programme achieved its intended outcomes and whether it was worth the investment.

Impact evaluation. Rigorous assessment of the programme's causal impact: which outcomes would not have occurred without the programme? Impact evaluation typically uses comparison groups or other methods to attribute observed changes to the programme rather than to other factors.

Designing a useful evaluation

Start with the theory of change. A programme evaluation that doesn't test the theory of change is incomplete. Good evaluation design identifies the key causal claims in the theory of change — the assumptions that must hold for the programme to work — and tests whether they held.

Define evaluation questions. What are you trying to learn? Evaluation questions guide data collection and analysis. Common evaluation questions for grant programmes:
- Were the target beneficiaries reached?
- Did participants experience the intended short-term changes?
- Did funded organisations deliver the intended activities at the intended quality and scale?
- Did the programme contribute to the intended longer-term outcomes?
- Was the programme cost-effective relative to alternatives? (A worked sketch follows this list.)
- What worked well and should be continued? What should be changed?
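
To make the cost-effectiveness question concrete, here is a minimal sketch. Every name, cost, and outcome count is hypothetical, and cost per outcome is only one of several ways to frame the comparison:

```python
# Worked example with hypothetical figures: comparing cost per outcome
# across two approaches. Nothing here is drawn from a real programme.
programmes = {
    "employment grants": {"total_cost": 400_000, "outcomes": 160},
    "mentoring alternative": {"total_cost": 250_000, "outcomes": 125},
}

for name, p in programmes.items():
    cost_per_outcome = p["total_cost"] / p["outcomes"]
    print(f"{name}: £{cost_per_outcome:,.0f} per sustained job outcome")

# employment grants: £2,500 per sustained job outcome
# mentoring alternative: £2,000 per sustained job outcome
```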

Choose appropriate methods. Different questions require different methods. Quantitative surveys and administrative data answer "how many" questions; qualitative interviews and focus groups answer "why" and "how" questions. Most useful evaluations use mixed methods.

Build in data collection from the start. Evaluations that try to reconstruct what happened after the fact produce weaker evidence than evaluations that plan data collection from the beginning. Baseline data collected before the programme starts is essential for measuring change.
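
A minimal sketch of why the baseline matters, using hypothetical paired survey scores: with baseline and endline data you can report change per participant; with endline data alone you can only report levels, which say nothing about whether anyone improved.

```python
from statistics import mean

# Hypothetical paired scores (e.g. a 0-10 wellbeing scale) for the same
# participants before and after the programme. Figures are illustrative.
baseline = {"p1": 4, "p2": 5, "p3": 3, "p4": 6}
endline = {"p1": 6, "p2": 7, "p3": 5, "p4": 6}

# With a baseline, change can be measured per participant...
changes = [endline[p] - baseline[p] for p in baseline]
print(f"mean change: {mean(changes):+.1f} points")  # mean change: +1.5 points

# ...without one, only the endline level is known.
print(f"mean endline score: {mean(endline.values()):.1f}")  # 6.0
```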

Be proportionate. Rigorous randomised controlled trials are appropriate for some interventions at some points in their development. Most community grant programmes need evaluations that are good enough to answer the most important questions — not the most scientifically rigorous evaluation possible. Match evaluation investment to programme scale and maturity.

What data to collect

Applicant and grantee data. Application data describes who applies, who is funded, and what they propose to do. Grantee data (collected through progress and final reports) describes what they actually did and what outcomes they observed.

Administrative programme data. Application volume and conversion rates, assessment turnaround times, grant payment timelines, reporting compliance rates — programme administration data tells you how efficiently the programme is running.
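
These metrics fall out of application records directly. The sketch below uses hypothetical records and field names; any grants management system would hold equivalents:

```python
from datetime import date

# Hypothetical application records; field names are illustrative only.
applications = [
    {"submitted": date(2024, 3, 1), "decided": date(2024, 4, 12), "funded": True},
    {"submitted": date(2024, 3, 3), "decided": date(2024, 4, 2), "funded": False},
    {"submitted": date(2024, 3, 8), "decided": date(2024, 5, 1), "funded": True},
]

# Conversion rate: share of applications that were funded.
funded = sum(1 for a in applications if a["funded"])
conversion_rate = funded / len(applications)

# Assessment turnaround: days from submission to decision.
turnarounds = [(a["decided"] - a["submitted"]).days for a in applications]
mean_turnaround = sum(turnarounds) / len(turnarounds)

print(f"conversion rate: {conversion_rate:.0%}")                  # 67%
print(f"mean assessment turnaround: {mean_turnaround:.0f} days")  # 42 days
```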

Beneficiary data. For service delivery grants, the most important data comes from the people the funded services are trying to help. Beneficiary surveys, case file analysis, and qualitative interviews produce evidence of whether the intended changes occurred.

Comparison data. Where possible, comparison data — from unfunded organisations, similar populations in other areas, or pre-programme baselines — helps attribute observed changes to the programme rather than other factors.
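
One common way to use comparison data is a simple difference-in-differences estimate: compare the change in the funded group with the change in a comparison group over the same period. A minimal sketch with illustrative figures, which assumes both groups would otherwise have followed the same trend:

```python
# Hypothetical outcome rates (e.g. share of participants in employment),
# measured before and after the programme in funded and comparison areas.
funded_before, funded_after = 0.40, 0.55
comparison_before, comparison_after = 0.41, 0.46

# Raw change in the funded group overstates the programme's effect,
# because some change happened in the comparison group too.
raw_change = funded_after - funded_before            # +0.15
background = comparison_after - comparison_before    # +0.05

# Difference-in-differences: the change attributable to the programme,
# under the common-trend assumption stated above.
did = raw_change - background
print(f"estimated programme effect: {did * 100:+.0f} percentage points")
# estimated programme effect: +10 percentage points
```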

Using evaluation findings

Feed back into programme design. Evaluation findings should generate specific programme design changes. An evaluation that concludes "the programme is broadly working but eligibility criteria are excluding important community organisations" should produce a change to eligibility criteria.

Share with grantees. Grantees benefit from seeing aggregated evaluation findings — what the programme learned, what outcomes were produced across the portfolio. This builds sector knowledge and signals that the funder takes learning seriously.

Publish. Funders who publish programme evaluations — including honest accounts of what didn't work — contribute to sector knowledge. The grantmaking sector has too little published evaluation data. Publishing is a public good, even when the findings are uncomfortable.

Inform future strategy. Grant programme evaluations should feed into periodic strategic reviews. Evidence that a long-running programme has stopped producing the outcomes it was designed for is a case for programme redesign or retirement.


Tahua supports programme evaluation with configurable outcome frameworks, reporting data aggregation, and the audit trail depth needed to reconstruct programme history for evaluation purposes.

Book a conversation →