Monitoring and evaluation (M&E) answers the question that matters most in grantmaking: did the investment create the change it was intended to create? For funders, M&E provides accountability to donors and communities. For grantees, it provides evidence of impact and learning for programme improvement. Building M&E into grant programmes from the outset — not as an afterthought — is what distinguishes effective grantmaking from activity tracking.
Monitoring and evaluation are often used interchangeably but serve distinct functions:
Monitoring is ongoing — tracking what's happening as programmes run. Monitoring answers: Are we doing what we said we'd do? Are beneficiaries being reached? Are we on track against milestones?
Evaluation is periodic and deeper — assessing whether programmes are working and why. Evaluation answers: Did we achieve the outcomes we intended? What worked and what didn't? Was the theory of change correct?
Both are necessary. Monitoring without evaluation tells you what happened but not why, or whether it mattered. Evaluation without monitoring has no data to analyse.
A logic model (or programme logic) makes explicit the chain from activities to outcomes — the theory of change. Good M&E starts here:
Inputs → Activities → Outputs → Outcomes → Impact
Most grant reporting tracks outputs well. Effective M&E tracks outcomes — the changes that funders ultimately care about.
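To make the chain concrete, here is a minimal sketch of a logic model as a data structure. Everything in it (field names, example activities, and indicators) is hypothetical, invented for illustration rather than drawn from any prescribed M&E schema:

```python
# Illustrative only: a minimal logic-model structure. All field names and
# example content are hypothetical, not a prescribed M&E schema.
logic_model = {
    "inputs": ["grant funding", "programme staff", "venue"],
    "activities": ["weekly budgeting workshops"],
    "outputs": ["12 workshops delivered", "150 participants reached"],
    "outcomes": [
        {
            "statement": "participants manage household budgets more confidently",
            "indicator": "mean financial-confidence score (pre vs post)",
            "target": "+20% at programme end",
        }
    ],
    "impact": "reduced financial hardship in the community",
}

# Monitoring tracks the outputs; evaluation tests whether the outcomes
# (and, over time, the impact) actually followed from them.
for outcome in logic_model["outcomes"]:
    print(outcome["statement"], "->", outcome["indicator"])
```

Keeping each outcome paired with a measurable indicator and a target is what makes the model evaluable rather than aspirational.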
Pre/post surveys
Measuring participant knowledge, skills, attitudes, or behaviours before and after a programme. The simplest quantitative evidence of individual-level change. Limitations: self-report bias, no comparison group.
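For a sense of what the analysis involves, here is a minimal sketch of a pre/post comparison using a paired t-test. The scores are invented, and it assumes matched pre and post responses from the same participants:

```python
# Minimal sketch of a pre/post comparison. Scores are invented for
# illustration; real data would come from matched participant surveys.
from scipy import stats

pre = [42, 38, 55, 47, 51, 40, 44, 49]    # baseline scores
post = [50, 45, 58, 52, 60, 48, 47, 55]   # post-programme scores

t_stat, p_value = stats.ttest_rel(post, pre)  # paired: same participants
mean_change = sum(post) / len(post) - sum(pre) / len(pre)

print(f"Mean change: {mean_change:.1f} points (p = {p_value:.3f})")
# A significant positive change suggests improvement, but self-report bias
# and the lack of a comparison group still limit what can be attributed
# to the programme itself.
```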
Validated scales
Where available, using validated psychological or social measures (e.g., Kessler-10 for psychological distress, validated financial literacy scales). More credible than custom surveys, enables comparison with population norms.
Observation and assessment
Direct observation or professional assessment of participant progress — useful in education, health, and skills training contexts where an objective third-party assessment is feasible.
Case studies and stories
Qualitative narratives that illustrate impact — the human story behind the numbers. Not scientifically rigorous but powerful for communicating impact and capturing what quantitative measures miss.
Administrative data
Using routinely collected data (school attendance, hospital admissions, court records) to track outcomes — more objective than self-report, avoids survey fatigue. Requires data access agreements.
Comparison groups
Comparing outcomes for programme participants against a comparable group who didn't participate — the gold standard for attributing outcomes to the programme. Can be quasi-experimental (matching, regression discontinuity) or randomised (expensive and often impractical for community programmes).
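As a simplified sketch of the basic comparison (invented data; a real quasi-experimental design would also match or adjust for pre-existing differences between the groups):

```python
# Simplified sketch: compare outcomes for participants against a
# comparison group. Data are invented; real designs would match or
# adjust for pre-existing differences between the two groups.
from scipy import stats

participants = [62, 58, 71, 65, 69, 60]   # outcome scores, programme group
comparison = [54, 57, 55, 60, 52, 58]     # outcome scores, non-participants

t_stat, p_value = stats.ttest_ind(participants, comparison)
effect = sum(participants) / len(participants) - sum(comparison) / len(comparison)

print(f"Estimated programme effect: {effect:.1f} points (p = {p_value:.3f})")
```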
Internal evaluation
Grantees conduct their own evaluation — using staff or volunteers. Cost-effective, builds internal capability. Limitations: potential bias, may lack evaluation expertise.
External evaluation
An independent evaluator assesses programme outcomes. More credible than internal evaluation. Appropriate for significant grants where independence matters for accountability.
Developmental evaluation
Evaluation embedded in programme development — not just summative assessment but ongoing feedback that shapes programme design. Appropriate for innovative programmes where learning and adaptation are central.
Kaupapa Māori evaluation
Evaluation conducted within a Māori framework — using Māori concepts of success, Māori evaluators, and methods appropriate to Māori cultural contexts. Increasingly recognised and required for grant-funded programmes in Māori communities.
At programme design stage:
- Define outcomes clearly before any activity begins
- Establish baseline data (what does the situation look like before intervention?)
- Design data collection systems alongside programme delivery systems
- Plan evaluation budget as part of total programme cost
In grant agreements:
- Specify what outcomes the grant is intended to achieve
- Define minimum data collection requirements
- Budget for evaluation (typically 5-10% of programme cost for simple evaluations, 15-20% for external evaluations)
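Applying the budget guideline is simple arithmetic; a worked example, with an invented programme cost:

```python
# Worked example of the budget guideline above. The programme cost is
# invented; the percentage bands come from the guideline in the text.
programme_cost = 200_000  # total programme cost in NZD

simple_eval = (programme_cost * 0.05, programme_cost * 0.10)
external_eval = (programme_cost * 0.15, programme_cost * 0.20)

print(f"Simple evaluation:   ${simple_eval[0]:,.0f}-${simple_eval[1]:,.0f}")
print(f"External evaluation: ${external_eval[0]:,.0f}-${external_eval[1]:,.0f}")
# -> Simple evaluation:   $10,000-$20,000
# -> External evaluation: $30,000-$40,000
```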
During delivery:
- Regular data collection built into programme operations
- Monitoring against milestones with early warning of off-track delivery
- Willingness to adapt based on what monitoring reveals
At programme end:
- Collation and analysis of outcome data
- Honest assessment of what worked and what didn't
- Learning documented and shared — internally and with funder
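A minimal sketch of the collation step, assuming outcome scores were collected per participant at baseline and programme end (the data and column names are hypothetical):

```python
# Minimal collation sketch using pandas. The data and column names are
# hypothetical stand-ins for whatever the programme actually collected.
import pandas as pd

records = pd.DataFrame({
    "participant": ["a", "b", "c", "d"],
    "cohort": ["2023", "2023", "2024", "2024"],
    "pre_score": [40, 45, 38, 50],
    "post_score": [48, 47, 46, 58],
})

records["change"] = records["post_score"] - records["pre_score"]
summary = records.groupby("cohort")["change"].agg(["mean", "count"])
print(summary)
```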
Attribution: programmes don't operate in isolation. Other factors — economic conditions, other interventions, natural events — also affect outcomes. Attributing outcomes to a single grant programme is always partial.
Long causal chains: the outcomes funders care about (reduced poverty, improved health outcomes) are often far downstream from programme activities. Programmes can achieve intermediate outcomes (skills gained, attitudes changed) without the long-term outcomes being visible within the grant period.
Participant privacy: collecting outcome data from vulnerable populations requires careful attention to privacy, consent, and data security. New Zealand's Privacy Act 2020 applies.
Kaupapa Māori alignment: Western evaluation frameworks don't always fit Māori community programmes. Outcomes may be relational, spiritual, and cultural rather than individually measurable.
Resource constraints: small organisations face genuine resource constraints for evaluation. Proportionate M&E is more achievable than comprehensive evaluation — and more likely to be sustained.
Finally, funders shape M&E practice across their portfolios: the outcomes they specify in grant agreements, the data collection they require, and the evaluation they budget for all set the standard grantees work to.
Tahua's grants management platform supports funders and grantees in building M&E into grant programmes — with outcome tracking, milestone monitoring, evaluation report management, and the portfolio analytics that help funders understand impact across their entire grants portfolio.