Grant Programme Evaluation: How Funders Measure What Their Grants Actually Achieve

Most grants programmes are good at tracking whether grants were delivered as intended. The money was released, the reports were received, the milestones were checked off. The audit trail is intact.

What is harder — and less commonly done well — is evaluating whether the grants programme is actually working. Not whether the process was followed, but whether the thing the programme was designed to achieve is, in fact, being achieved. Whether the funded organisations are producing meaningful outcomes. Whether the assessment process is selecting the grants most likely to produce those outcomes. Whether the programme itself is the right vehicle for the funder's objectives.

These are evaluation questions. They sit above the process level and require a different kind of inquiry.

The difference between monitoring and evaluation

Grants monitoring is ongoing. It tracks the status of active grants: milestones due, reports received, conditions met, payments released. It is primarily backward-looking (what has happened) with some forward-looking alerts (what is coming due). It asks whether the grant is on track.

Grants evaluation is periodic. It looks across a programme or portfolio and asks whether the programme is working. It is necessarily aggregate — looking at patterns across multiple grants — and is typically more interpretive than monitoring. It asks whether the programme is producing its intended outcomes.

Funders who conflate monitoring and evaluation typically end up doing monitoring only. They can tell you whether every grantee submitted their mid-year report on time. They cannot tell you whether the portfolio of funded organisations is achieving anything that the community noticeably benefits from.
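
The distinction shows up clearly in data terms. As a minimal sketch, assuming a hypothetical set of grant records (the field names are illustrative, not drawn from any particular grants management system): a monitoring query looks at individual grants in flight, while an evaluation query aggregates outcomes across the portfolio.

```python
from datetime import date

# Hypothetical portfolio records; field names are invented for illustration.
grants = [
    {"id": "G-001", "report_due": date(2024, 6, 30), "report_received": True,  "outcome_met": True},
    {"id": "G-002", "report_due": date(2024, 6, 30), "report_received": False, "outcome_met": None},
    {"id": "G-003", "report_due": date(2024, 9, 30), "report_received": True,  "outcome_met": False},
]

# Monitoring: per-grant and status-oriented. Which grants are off track today?
today = date(2024, 8, 1)
overdue = [g["id"] for g in grants if not g["report_received"] and g["report_due"] < today]
print("Overdue reports:", overdue)  # ['G-002']

# Evaluation: aggregate and outcome-oriented. Across the portfolio, did it work?
assessed = [g for g in grants if g["outcome_met"] is not None]
met = sum(1 for g in assessed if g["outcome_met"])
print(f"Primary outcome achieved: {met} of {len(assessed)} assessed grants")
```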

Building the evaluation questions before you start

The most common failure in grant programme evaluation is not doing it. The second most common is attempting to evaluate retrospectively against questions that were never specified in advance.

To evaluate whether a grants programme achieves its objectives, you need:
1. A clear statement of what the programme is supposed to achieve
2. A theory of change — how is funding these organisations supposed to lead to those outcomes?
3. Indicators that can be measured to assess progress toward the outcomes
4. A baseline — what was the situation before the programme began?
5. Attribution reasoning — how much of any observed change can be attributed to the programme versus other factors?

Most programmes have something like the first two. Very few have specified indicators, and almost none have baseline data collected before the first round opened.
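
As a concrete sketch of what a pre-specified indicator with a baseline might look like as structured data (the field names and values here are invented for illustration, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class OutcomeIndicator:
    """One pre-specified, measurable indicator; all fields are illustrative."""
    name: str
    unit: str
    baseline: float        # value measured before the first funding round opened
    baseline_year: int
    target: float          # the level the programme intends to reach

    def progress(self, observed: float) -> float:
        """Fraction of the baseline-to-target distance covered by an observed value."""
        return (observed - self.baseline) / (self.target - self.baseline)

# Example: a programme aiming to lift twelve-month employment retention.
retention = OutcomeIndicator(
    name="Employment retained at 12 months",
    unit="% of placed participants",
    baseline=40.0, baseline_year=2021, target=60.0,
)
print(f"{retention.progress(52.0):.0%} of the way to target")  # 60%
```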

The practical implication: a programme first evaluated five years after it began, without pre-specified indicators or baseline data, is extremely difficult to assess meaningfully. You can document outputs (grants made, organisations funded, amounts disbursed). You cannot demonstrate outcomes (what changed as a result of the funding) with any confidence.

Starting evaluation thinking at programme design is not bureaucratic overhead. It is the difference between being able to show funders, governance, and the community that the programme worked — and not being able to.

Output versus outcome versus impact

Evaluation frameworks typically distinguish between outputs, outcomes, and impacts. The distinction matters because programmes tend to report on what they can easily count, which is usually outputs, and call it evaluation.

Outputs are the direct products of funded activity. Number of workshops delivered. Number of participants trained. Hectares of land treated. Jobs placed. These are measurable from delivery records and can usually be reported with high confidence.

Outcomes are the changes that result from those outputs. Participants who applied their training to improve their organisations. Land where the treated pest species did not re-establish. Individuals who maintained employment twelve months after placement. Outcomes are harder to measure and require follow-up beyond the grant period.

Impacts are the longer-term changes in the community, environment, or sector that the programme is trying to contribute to. Reduced inequality. Healthier ecosystems. Stronger civil society. Impacts are partially attributable to the programme — other factors contribute — and take years to manifest.

A programme that reports only outputs is telling you what it bought. A programme that reports on outcomes is beginning to tell you whether it worked. A programme that engages seriously with impact is asking whether it is contributing to the change it exists to create.

Most funder reporting requirements focus on outputs because outcomes are harder to specify, harder to measure, and harder for grantees to attribute confidently. Building outcome measurement into grant conditions — requiring grantees to collect specific data and report it alongside their financial accountability — starts to address this gap.
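
One hedged sketch of what that could look like: outcome indicators written into each grant's conditions as structured reporting requirements, tagged by the level they measure. The types and field names are assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    """The three reporting levels distinguished above."""
    OUTPUT = "output"    # counted from delivery records (workshops, participants)
    OUTCOME = "outcome"  # change resulting from outputs; needs follow-up measurement
    IMPACT = "impact"    # long-term change, only partially attributable to the programme

@dataclass
class ReportingRequirement:
    """A measurement obligation written into grant conditions; names are illustrative."""
    indicator: str
    level: Level
    due_months_after_start: int

# A condition set that goes beyond outputs: the grantee must follow up
# participants a year after placement, alongside routine financial reporting.
conditions = [
    ReportingRequirement("Participants trained", Level.OUTPUT, 6),
    ReportingRequirement("Participants still employed at 12 months", Level.OUTCOME, 18),
]
for c in conditions:
    print(f"{c.level.value:>7}: {c.indicator} (due month {c.due_months_after_start})")
```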

Using portfolio data for evaluation

A grants management system that captures structured data across a portfolio creates the substrate for aggregate evaluation. If every grant in a programme is recorded against the same criteria, the same outcome indicators, and the same accountability structure, the portfolio data becomes evaluable.

Funders who can query their portfolio and produce analyses such as "X% of funded organisations in this cohort achieved their primary outcome indicator" or "grants to this organisation type systematically outperformed others in the portfolio" are in a position to make evidence-based decisions about programme design.

This requires that the data be structured — not a collection of PDF reports, but a database of comparable records. It requires that outcome indicators be defined consistently enough to aggregate across grants. And it requires someone with the analytical capability to interpret the data and draw the right conclusions from it.
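
Under those conditions, the analyses described above reduce to simple group-by aggregations. A minimal sketch, assuming hypothetical comparable records rather than any real system's schema:

```python
from collections import defaultdict

# Hypothetical comparable records, one per grant; a real portfolio would be
# queried from the grants management system rather than written as literals.
portfolio = [
    {"cohort": 2022, "org_type": "charitable trust", "outcome_met": True},
    {"cohort": 2022, "org_type": "charitable trust", "outcome_met": True},
    {"cohort": 2022, "org_type": "incorporated society", "outcome_met": False},
    {"cohort": 2023, "org_type": "incorporated society", "outcome_met": True},
    {"cohort": 2023, "org_type": "charitable trust", "outcome_met": False},
]

def achievement_rate(records, key):
    """Share of grants that met their primary outcome indicator, grouped by `key`."""
    met, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r[key]] += 1
        met[r[key]] += r["outcome_met"]
    return {k: met[k] / total[k] for k in total}

print(achievement_rate(portfolio, "cohort"))    # e.g. {2022: 0.67, 2023: 0.5}
print(achievement_rate(portfolio, "org_type"))  # does one type systematically outperform?
```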

For most funders, this is aspirational rather than current practice. But the gap between current practice and this aspiration is smaller than it seems. It begins with defining the programme's theory of change, specifying outcome indicators for each grant, and building those indicators into the accountability structure — including the data captured in the grants management system.

When evaluation should lead to programme change

Evaluation findings are only valuable if they inform decisions. The most useful thing evaluation can do is justify a change to the programme — a change in criteria, a change in target population, a change in grant size, a change in the programme's theory of change.

The institutional environment for this kind of learning is not universal. Some governance cultures reward stability and view changes to programme design as admissions of error. Some funders have accountability relationships to government that make significant programme changes difficult without external approval.

But where the institutional conditions allow, evaluation that leads to programme change is evaluation that earns its cost. The funder who can say "our 2022 evaluation showed that grants to organisations without governance capacity were not succeeding, so we added a capacity-building component to the programme" is demonstrating the kind of learning that differentiates effective funders from ones that simply distribute money and hope for the best.


For funders designing grants programmes with evaluation built in from the start, the how it works overview covers Tahua's reporting and portfolio management capabilities. Charitable trust and foundation funders should also look at the community foundations page. To discuss how to build evaluation-ready accountability into your programme, book a conversation.