Grant programme evaluation asks a different question from grantee outcome reporting. While grantee reports ask "what did this organisation achieve?", programme evaluation asks "is our grant programme achieving what we intended, and should we continue it, change it, or stop it?" Funders who only assess grantee performance without ever evaluating their own programmes are missing the most important accountability question: are we actually making things better?
Programme designs go stale. A grant programme designed five years ago may have reflected the community needs of five years ago. Without evaluation, it can continue long past the point where its design is relevant to current community priorities.
Unintended consequences. Grant programmes frequently have effects their designers didn't intend — some beneficial, some harmful. Evaluation surfaces these effects and allows programmes to be adjusted.
Resource allocation questions. Evaluation evidence supports decisions about whether to continue, expand, modify, or end a programme — decisions that have significant implications for both funders and the communities they serve.
Funder accountability. Funders are accountable to the communities they serve and (where relevant) to donors, government, or the public. Systematic evaluation is part of demonstrating that accountability.
Sector learning. Evaluated grant programmes produce knowledge that benefits the philanthropic sector. When funders share evaluation findings, the sector learns collectively what approaches work.
Process evaluation examines how the grant programme operates: how accessible the application process is, how consistently and fairly applications are assessed, how well the funder supports grantees, and the quality of funder-grantee relationships. Process evaluation doesn't assess outcomes — it assesses whether the programme is being run well.
Outcome evaluation examines whether grantees are achieving the outcomes the programme intended: what change is occurring in communities as a result of funded activities? This typically aggregates data across the grant portfolio rather than assessing individual grantees.
Impact evaluation attempts to determine causation — whether the outcomes observed are attributable to the grant programme rather than other factors. This is methodologically challenging and usually only feasible for large, well-resourced programmes. Most philanthropic programmes can realistically aspire to contribution rather than attribution.
Developmental evaluation is designed for complex, emergent programmes where the right approach is still being discovered. Rather than evaluating against fixed criteria, a developmental evaluator embeds within the programme, helps the funder understand what's working and what's not, and supports ongoing adaptation. This is particularly valuable for new programmes or programmes operating in complex adaptive systems.
Summative evaluation assesses a completed programme — was it effective? What did it achieve? Should it be replicated? It is retrospective, looking back at a programme that has finished.
Formative evaluation is ongoing and forward-looking — how can we improve this programme while it's running? Formative evaluation produces recommendations that can be implemented while the programme continues.
Relevance: Is the programme addressing a real and significant community need? Is it the right approach given what we know about what works in this area?
Reach: Is the programme reaching the communities it was designed to serve? Who is being left out, and why?
Effectiveness: Are grantees achieving the outcomes the programme intended? What outcomes are being achieved that weren't anticipated?
Efficiency: Is the programme achieving its goals at a reasonable cost? Are there ways to achieve similar outcomes with less resource?
Sustainability: Are the changes produced by the programme likely to persist? Are organisations and communities building capacity that will outlast the grant period?
Equity: Are the benefits of the programme being distributed equitably? Are marginalised communities benefiting proportionally?
Learning: What has the funder learned from this programme that should inform future design? What would they do differently?
Attribution — proving that a programme caused observed outcomes — is rarely achievable in community settings where many factors influence outcomes simultaneously. Contribution analysis is a more realistic approach: demonstrating that the programme made a meaningful contribution to observed outcomes, even if it wasn't the only cause.
Contribution analysis involves:
1. Articulating the theory of change — the logic by which the programme expects to contribute to outcomes
2. Collecting evidence about whether the programme was implemented as designed
3. Collecting evidence about whether the expected outcomes occurred
4. Examining alternative explanations for the observed outcomes
5. Assessing the strength of the evidence for the programme's contribution
This approach doesn't require control groups or randomisation — it uses a combination of quantitative outcome data, qualitative evidence of programme processes, and systematic analysis of alternative explanations.
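As an illustration of that systematic analysis, the sketch below organises contribution evidence as a chain of causal links, each with an evidence rating and noted alternative explanations. The link descriptions, the rating scale, and the weakest-link rule are illustrative assumptions, not part of any standard method; contribution analysis in practice is a qualitative, narrative exercise.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: contribution analysis is a qualitative exercise,
# and the rating scale and weakest-link rule here are assumptions for clarity.
RATINGS = ["weak", "moderate", "strong"]

@dataclass
class CausalLink:
    description: str                 # one step in the theory of change
    implementation_evidence: str     # was this step delivered as designed?
    outcome_evidence: str            # did the expected change occur at this step?
    alternative_explanations: list[str] = field(default_factory=list)

    def strength(self) -> str:
        """A link is only as strong as its weaker body of evidence."""
        return min(self.implementation_evidence, self.outcome_evidence,
                   key=RATINGS.index)

def contribution_assessment(links: list[CausalLink]) -> str:
    """The overall contribution claim is limited by the weakest link in the chain."""
    return min((link.strength() for link in links), key=RATINGS.index)

theory_of_change = [
    CausalLink(
        description="Grants fund youth mentoring places in rural towns",
        implementation_evidence="strong",    # grantee reports confirm delivery
        outcome_evidence="moderate",         # school engagement data improving
        alternative_explanations=["new regional school attendance programme"],
    ),
    CausalLink(
        description="Mentored young people stay engaged in education",
        implementation_evidence="moderate",
        outcome_evidence="weak",             # too early for retention data
    ),
]

print(contribution_assessment(theory_of_change))  # prints "weak"
```

The weakest-link rule simply reflects step 5 above: the contribution claim is only as credible as its least-evidenced step, which is where further evidence gathering should focus.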
Developmental evaluation (DE), developed by Michael Quinn Patton, is increasingly used by funders working in complex community settings where outcomes are not fully predictable and the right approach is still being developed.
Key features of developmental evaluation:
- The evaluator works alongside the programme team, not as an external assessor
- The focus is on learning and adaptation, not accountability and judgment
- The evaluation framework evolves with the programme
- Real-time feedback informs programme decisions
- Complexity is embraced rather than simplified away
DE is particularly valuable for:
- New programmes where the model is not yet established
- Programmes working with complex social problems (addiction, poverty, family violence)
- Programmes operating in culturally distinctive contexts where standard evaluation approaches may not fit
- Funders who genuinely want to learn and adapt, not just assess
Participatory evaluation involves community members — including grantees and their clients — in the evaluation process itself, not just as data sources but as evaluators. This is particularly important when:
- The communities being evaluated have been historically marginalised and have experienced extractive research practices
- Community members have knowledge about what works that external evaluators don't
- Building community evaluation capacity is itself a goal
- The funder wants to ensure evaluation serves community interests, not just funder interests
In New Zealand, participatory evaluation aligned with kaupapa Māori research principles is particularly relevant for programmes working with Māori communities. Kaupapa Māori evaluation uses Māori concepts, values, and methods — centring te ao Māori in both what is measured and how it is measured.
For funders with limited evaluation budgets, practical evaluation design involves:
Using existing data. What data do grantees already collect? What data is available from government administrative systems? Using existing data rather than generating new data reduces evaluation costs significantly.
Sampling. Full evaluation of every grantee in a portfolio is rarely necessary or feasible. Evaluating a representative sample, or evaluating a smaller number of grantees in depth, can generate sufficient insight at lower cost; a brief sketch below shows one way to combine sampling with existing report data.
Learning questions, not measurement questions. Evaluation designed around specific questions the funder wants to answer ("are we reaching rural communities?", "are our grants producing lasting change?") is more useful than comprehensive measurement of everything.
Building evaluation into programme design. Evaluation planned at the programme design stage is less expensive and more useful than evaluation bolted on retrospectively. Data collection systems built into the grantee reporting process produce evaluation-ready data without additional data collection effort.
Sharing evaluation costs. Where multiple funders are investing in the same sector, shared evaluation — co-commissioned and co-funded — distributes costs and produces more robust evidence than each funder evaluating independently.
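To make the "using existing data" and "sampling" points concrete, here is a minimal sketch of both, assuming grantee reports have already been exported to a CSV. The file name, the column names (grantee, region, grant_amount, people_reached), and the two-per-region sample size are hypothetical choices for illustration, not a prescribed design.

```python
import pandas as pd

# Minimal sketch: build a portfolio-level view from data grantees already
# submit, then draw a small stratified sample for in-depth evaluation.
# File name, column names, and sample sizes are illustrative assumptions.
reports = pd.read_csv("grantee_reports.csv")  # grantee, region, grant_amount, people_reached

# Portfolio aggregation: no new data collection required.
portfolio = reports.groupby("region").agg(
    grants=("grantee", "nunique"),
    total_funding=("grant_amount", "sum"),
    people_reached=("people_reached", "sum"),
)
portfolio["cost_per_person"] = portfolio["total_funding"] / portfolio["people_reached"]
print(portfolio)

# Stratified sample: up to two grantees per region for deeper qualitative work,
# so smaller regions are not crowded out by a simple random draw.
sample = (
    reports.groupby("region", group_keys=False)
           .apply(lambda g: g.sample(n=min(2, len(g)), random_state=1))
)
print(sample[["grantee", "region"]])
```

The same aggregation logic can be run inside whatever reporting system the funder already uses; the point is that portfolio-level questions can often be answered from data grantees are already submitting.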
Tahua's grants management platform is built to support programme evaluation — with portfolio-level reporting, grantee data aggregation, and the management information that funders need to understand whether their programmes are working.