Evaluating grant programmes — understanding whether funding is achieving its intended impact — is one of the most important and underinvested practices in philanthropy. Many foundations make grants, collect reports, and assume they know whether they're making a difference. Few have the evidence infrastructure to actually know. Building rigorous, proportionate evaluation into grant programmes is essential for foundations that genuinely want to learn and improve.
Why evaluation matters

Accountability: funders are answerable to their communities, their founders' intentions, and — in the case of foundations benefiting from tax exemptions — the public. Evaluation provides evidence that philanthropic resources are being deployed responsibly.
Learning: evaluation surfaces what works, what doesn't, and why — enabling foundations to make better decisions over time. Without evaluation, grantmaking is essentially guesswork.
Resource allocation: with finite resources and multiple competing priorities, funders must make choices about where to invest. Evidence about what produces outcomes is essential for these choices.
Grantee development: evaluation findings shared with grantees help them understand what's working and improve their programmes. Good evaluation serves grantees as well as funders.
Approaches to evaluation

Evaluation in philanthropy ranges from simple activity reporting to sophisticated randomised controlled trials. The right approach depends on:
- Type of grant: short-term project vs. long-term systems change
- Causal claims: simple activity delivery vs. complex behaviour change
- Resources available: a $5,000 grant can't support expensive evaluation
- Stage of programme: early-stage innovation vs. established programmes
Logic models
A logic model maps the causal chain from inputs through activities to outputs, outcomes, and impact:
- Inputs: resources (funding, staff, volunteer time, facilities)
- Activities: what the programme does (workshops, counselling sessions, advocacy campaigns)
- Outputs: what is directly produced (number of sessions, people reached, materials distributed)
- Outcomes: changes in knowledge, skills, attitudes, behaviour, or circumstances
- Impact: longer-term changes at population or systems level
Logic models are foundational planning and evaluation tools. They make explicit the assumptions underlying a programme — enabling critical examination of whether those assumptions are justified.
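To make the structure concrete, a logic model can be captured as a simple data structure. The sketch below is an illustration only, not tied to any particular tool; the field names and the mentoring programme example are invented for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: the causal chain from inputs to impact."""
    inputs: list[str] = field(default_factory=list)      # resources: funding, staff, volunteers
    activities: list[str] = field(default_factory=list)  # what the programme does
    outputs: list[str] = field(default_factory=list)     # what is directly produced
    outcomes: list[str] = field(default_factory=list)    # changes in knowledge, behaviour, circumstances
    impact: list[str] = field(default_factory=list)      # longer-term population/system-level change

# Hypothetical example: a youth mentoring programme
mentoring = LogicModel(
    inputs=["$50,000 grant", "2 programme staff", "30 volunteer mentors"],
    activities=["weekly one-to-one mentoring", "school holiday workshops"],
    outputs=["40 young people matched", "200 mentoring sessions delivered"],
    outcomes=["improved school attendance", "stronger adult relationships"],
    impact=["higher educational attainment across the community"],
)
```

Writing the chain down in this form makes gaps visible: if an outcomes field is empty, or an outcome has no activity that plausibly produces it, the model itself flags the problem.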
Theory of change
A theory of change is a more detailed causal model — specifying not just the chain from inputs to outcomes, but the mechanisms of change: why and how activities produce outcomes. A good theory of change:
- Identifies the specific problem being addressed
- Specifies who the programme serves
- Articulates how programme activities create change (the "so that" logic)
- Identifies assumptions that must hold for change to occur
- Describes the conditions required for success
Theories of change are more useful than logic models for complex programmes where causal mechanisms are not self-evident.
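One way to make the "so that" logic inspectable is to record each causal step alongside the assumptions it depends on. A minimal sketch, with a hypothetical financial literacy programme invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class CausalStep:
    """One link in a theory of change: a starting condition, the change it
    should produce, and the assumptions that must hold for the link to work."""
    if_we_have: str
    so_that: str
    assumptions: list[str]

# Hypothetical chain for a financial literacy programme
theory_of_change = [
    CausalStep(
        if_we_have="financial literacy workshops for new parents",
        so_that="parents understand budgeting and debt",
        assumptions=["parents can attend (timing, childcare)",
                     "content fits the local context"],
    ),
    CausalStep(
        if_we_have="parents who understand budgeting and debt",
        so_that="households reduce high-interest borrowing",
        assumptions=["knowledge, not income, is the binding constraint"],
    ),
]

# Each assumption doubles as a testable evaluation question.
for step in theory_of_change:
    for assumption in step.assumptions:
        print(f"Test: does '{assumption}' hold for '{step.so_that}'?")
```

The payoff is the last loop: every assumption becomes an explicit evaluation question, rather than an unstated belief that only surfaces when the programme underperforms.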
Outcomes frameworks
Outcomes frameworks specify a set of outcomes relevant to a programme area, along with indicators for measuring each outcome. Examples:
- Wellbeing frameworks: Te Ara Hou (NZ) or equivalent — specifying dimensions of community or individual wellbeing
- Population-level outcomes: school readiness, economic participation, crime rates
- Programme-specific outcomes: knowledge gain, skill development, behaviour change
Using a shared outcomes framework enables comparison across programmes and aggregation of data.
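In practice, a shared outcomes framework is a mapping from outcomes to agreed indicators. A minimal, hypothetical sketch — the outcomes and indicators below are invented examples, not drawn from any published framework:

```python
# Hypothetical framework: outcome -> agreed indicators (invented examples)
OUTCOMES_FRAMEWORK = {
    "school readiness": [
        "% of children meeting developmental milestones at school entry",
        "early childhood education participation rate",
    ],
    "economic participation": [
        "employment rate among programme participants",
        "change in median household income",
    ],
}

def indicators_for(outcome: str) -> list[str]:
    """Look up the shared indicators for an outcome, so every programme
    reporting against it measures the same things."""
    return OUTCOMES_FRAMEWORK.get(outcome, [])

print(indicators_for("school readiness"))
```

Because every grantee reporting against "school readiness" uses the same indicators, data can be compared and aggregated across the portfolio.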
Developmental evaluation
Developmental evaluation (DE) is an evaluation approach designed for complex, evolving programmes where the path is not yet known. Unlike traditional summative evaluation (which assesses at the end whether a programme worked), DE:
- Supports real-time learning and adaptation
- Works alongside programme designers rather than independently assessing them
- Asks questions suited to the current stage of development
- Treats uncertainty and adaptation as features, not problems
DE is particularly suited to innovation grants, systems change initiatives, and early-stage programmes where the right approach is still being figured out.
Contribution analysis
For complex social programmes where multiple factors influence outcomes, determining causation is very difficult. Contribution analysis asks a more modest question: is there plausible evidence that the programme contributed to the observed changes, given what else was happening? This is more realistic than causal attribution for most community programmes.
Randomised controlled trials (RCTs)
RCTs — where participants are randomly assigned to receive a programme or not — provide the strongest evidence of causation. However, they:
- Are expensive and time-consuming
- Require sufficient scale (many participants)
- May not be ethical (withholding a beneficial programme from a control group)
- Are not appropriate for most philanthropic programmes
RCTs are most appropriate for testing specific, replicable interventions at scale — typically health or education programmes that have been developed through earlier research phases.
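The core mechanic of an RCT — random assignment — is simple to express. The sketch below assumes a flat participant list and a 50/50 split; the participant names and fixed seed are illustrative choices, not part of any standard protocol.

```python
import random

def random_assignment(participants: list[str], seed: int = 42) -> tuple[list[str], list[str]]:
    """Randomly split participants into treatment and control groups (50/50).

    Randomisation is what licenses the causal claim: on average the two
    groups differ only in whether they received the programme."""
    rng = random.Random(seed)  # fixed seed keeps the assignment auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

treatment, control = random_assignment([f"participant_{i}" for i in range(100)])
```

The mechanics are cheap; what makes RCTs expensive is everything around them — recruitment at scale, follow-up data collection, and the ethics of withholding the programme from the control group.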
Proportionate evaluation

Evaluation should be proportionate to the grant — in both rigour and cost. A reasonable rule of thumb is to spend 5-10% of grant value on evaluation: for a $10,000 grant, $500-1,000 is appropriate; for a $1 million grant, $50,000-100,000 might be reasonable.
Small grants: pre/post surveys, participant feedback, activity tracking. Simple, low-cost, adequate for activity-level accountability.
Medium grants: logic model with indicators, regular progress data, end-of-project evaluation report. Moderate investment; balances cost with evidence value.
Large, multi-year grants: formal evaluation by external evaluator, theory of change, mixed-methods data (quantitative + qualitative), contribution analysis. Significant investment justified by grant scale.
System change grants: developmental evaluation, long-term tracking, field-level data, policy environment monitoring.
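To make the proportionality rule concrete, the sketch below pairs the 5-10% budget band with a size-based lookup of suggested methods. The dollar thresholds and tier boundaries are invented for illustration — a real foundation would set its own, and systems change grants are selected by grant type rather than size — so treat this as a starting point, not a rule.

```python
# Hypothetical tier thresholds; a real foundation would set its own boundaries.
EVALUATION_TIERS = [
    (25_000, ["pre/post surveys", "participant feedback", "activity tracking"]),
    (250_000, ["logic model with indicators", "regular progress data",
               "end-of-project evaluation report"]),
    (float("inf"), ["external evaluator", "theory of change",
                    "mixed-methods data", "contribution analysis"]),
]

def evaluation_plan(grant_value: float) -> dict:
    """Suggest an evaluation budget band (5-10% of grant value) and methods."""
    methods = next(m for limit, m in EVALUATION_TIERS if grant_value <= limit)
    return {
        "budget_low": grant_value * 0.05,
        "budget_high": grant_value * 0.10,
        "methods": methods,
    }

print(evaluation_plan(10_000))     # budget band $500-$1,000, small-grant methods
print(evaluation_plan(1_000_000))  # budget band $50,000-$100,000, large-grant methods
```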
Evaluation requirements can be a significant burden on small grantees. Good evaluation design:
- Uses routinely collected data where possible (rather than requiring new data collection)
- Collects data that serves grantees (not just funders)
- Aligns evaluation with grantee's own learning needs
- Provides evaluation capacity support (funding, expertise)
Funders who impose disproportionate evaluation requirements on small grantees signal distrust and drain resources from programme delivery.
Building an evaluation practice

Invest in evaluation infrastructure: dedicated evaluation staff, shared evaluation tools, and common outcomes frameworks enable better evaluation at lower cost per grant.
Distinguish learning evaluation from accountability evaluation: evaluation that primarily answers "did it work?" is backward-looking; evaluation that primarily asks "what are we learning?" enables real-time improvement. Both are valuable, but they answer different questions and should be designed separately.
Share evaluation findings: evaluations that sit in filing cabinets don't contribute to sector learning. Publishing evaluation findings — including failures — adds value to the field.
Evaluate the foundation's own work: not just grantee programmes, but the foundation's grantmaking strategy, processes, and relationships. Are your selection criteria producing strong grantees? Is your theory of change holding up? Is your process accessible?
Tahua's grants management platform supports evaluation-focused grantmaking — with integrated outcomes tracking, logic model documentation, indicator management, and the reporting tools that help foundations build rigorous evaluation into their grant programmes.