Learning and Evaluation in Grant Programmes: How Funders Know What Works

Most grant programmes collect data — grantee progress reports, financial acquittals, output counts — but far fewer use that data to genuinely learn whether their grantmaking is working and how to improve it. The gap between data collection and learning is one of the most persistent challenges in philanthropy: well-intentioned reporting requirements produce oceans of information that nobody synthesises into actionable insight.

Genuine learning and evaluation in grantmaking is rarer than it should be, and more valuable than it appears. Foundations that truly learn from their grantmaking make better decisions, communicate more honestly about their impact, and build the evidence base that others can use.

The distinction between monitoring and evaluation

Monitoring is ongoing: tracking whether grants are being implemented as planned, whether outputs are being produced, whether grantees are meeting their commitments. Monitoring data comes primarily from grantee reports. It answers: "Is this happening?"

Evaluation is deeper: assessing whether the funded work is actually achieving the intended outcomes, why (or why not), and what this means for future grantmaking. Evaluation answers: "Is this working? Why? What should we do differently?"

Most grant programmes do monitoring reasonably well. Far fewer do meaningful evaluation. The reasons are structural:
- Evaluation costs money (typically 5-10% of programme budget for rigorous evaluation)
- Evaluation results may be unflattering or inconclusive — which is uncomfortable for funders to share
- Evaluation requires technical capacity that many grant teams don't have in-house
- There's often little external pressure to evaluate: funders aren't accountable to anyone who will ask hard questions about impact

Types of evaluation in grantmaking

Programme-level evaluation: Assessment of whether the overall grant programme is achieving its intended outcomes — not individual grants, but the strategy as a whole. This typically involves commissioning an independent evaluator who reviews portfolio data, interviews grantees and stakeholders, and produces a strategic assessment.

Grantee-level evaluation: Supporting or commissioning evaluation of specific significant grants or programmes within the portfolio. Useful for learning about particular approaches and for generating evidence that can inform future grantmaking.

Portfolio analysis: Using aggregate data from grantee reports to understand patterns across the portfolio — which types of organisations, approaches, or contexts are associated with better outcomes. This is cheaper than formal evaluation and can yield useful learning, though it has significant methodological limitations.
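
As a minimal sketch of what portfolio analysis can look like in practice, assuming grantee report data has been exported to a flat table (the column names here are invented for illustration, not any platform's real schema):

```python
import pandas as pd

# Hypothetical export of grantee report data; columns are illustrative.
reports = pd.DataFrame({
    "grant_id": [101, 102, 103, 104, 105, 106],
    "org_type": ["community", "community", "research",
                 "research", "advocacy", "advocacy"],
    "grant_size": [50_000, 80_000, 120_000, 60_000, 40_000, 90_000],
    "outcome_score": [3.2, 4.1, 2.8, 3.9, 4.4, 3.1],  # e.g. a 1-5 rating
})

# Pattern-spotting, not causal inference: average reported outcomes
# by organisation type, alongside grant counts and median grant size.
summary = reports.groupby("org_type").agg(
    grants=("grant_id", "count"),
    median_grant=("grant_size", "median"),
    mean_outcome=("outcome_score", "mean"),
)
print(summary.sort_values("mean_outcome", ascending=False))
```

Associations surfaced this way suggest where to look next; because the data is self-reported and there is no counterfactual, they don't establish what caused what.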

Rapid learning cycles: Shorter, less rigorous learning processes designed to generate actionable insight quickly — surveys of recent grantees, convening grantee practitioners to share experience, structured reflection sessions with grant teams.

Sector research: Commissioning or using existing research about what works in a given domain to inform grantmaking strategy, rather than trying to evaluate your own grants in isolation.

Designing for learning from the start

Programmes designed with learning in mind produce better evidence:

Clear theory of change: If you can't articulate how your grants should produce outcomes, you can't evaluate whether they're doing so. A clear theory of change — with testable assumptions — makes evaluation possible.

Consistent outcome tracking: If every grantee tracks and reports on the same outcome indicators, the data can be aggregated at portfolio level. If each grantee reports on different indicators, no portfolio-level learning is possible.
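
One hedged illustration of what "consistent" means here, assuming the funder defines a standard indicator set up front (the indicator names below are invented):

```python
# A shared indicator set the funder defines up front; names are invented.
STANDARD_INDICATORS = {
    "people_reached",
    "sessions_delivered",
    "participant_wellbeing_score",
}

def missing_indicators(report: dict) -> list[str]:
    """Return the standard indicators a grantee report omits."""
    return sorted(STANDARD_INDICATORS - report.keys())

# Reports that share the same indicators can be summed or averaged
# across the portfolio; a report with gaps breaks that aggregation.
report = {"people_reached": 240, "sessions_delivered": 18}
gaps = missing_indicators(report)
if gaps:
    print(f"Report incomplete; missing: {gaps}")
```

A grants platform can enforce this kind of check at report submission; the underlying point is that portfolio aggregation is only as good as the consistency of what's collected.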

Baseline data: Change can only be measured if you know where grantees started. Requiring baseline data at the start of a grant makes genuine outcome measurement possible at the end.

Comparison opportunities: The strongest evaluations have some form of comparison — funded vs. unfunded communities, before vs. after, different approaches to similar goals. Designing for comparison opportunities — even informally — strengthens learning.
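
To make the baseline and comparison points concrete, here is a toy worked example with invented numbers: the simplest comparative design measures change from baseline for funded organisations against a comparison group, an informal difference-in-differences.

```python
# Invented mean outcome scores (1-5 scale) at baseline and endline,
# for funded organisations and an unfunded comparison group.
funded_baseline, funded_endline = 3.0, 4.1
comparison_baseline, comparison_endline = 3.1, 3.5

funded_change = funded_endline - funded_baseline              # ~1.1
comparison_change = comparison_endline - comparison_baseline  # ~0.4

# The comparison group's change estimates what would have happened
# anyway; subtracting it gives a rough estimate of the funded effect.
estimated_effect = funded_change - comparison_change          # ~0.7
print(f"Rough estimated effect of funding: {estimated_effect:.1f} points")
```

This sketches the logic, not a rigorous design: without random assignment or careful matching, the comparison group may differ from grantees in ways that bias the estimate.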

Longitudinal tracking: Many outcomes take longer than a grant period to manifest. Following grantees beyond the grant period — even through brief check-ins — enables longer-term learning.

Building a learning culture

Systematic learning requires cultural change, not just better systems:

Psychological safety for honest reporting: If grantees believe that reporting challenges honestly will jeopardise their funding, they'll frame everything positively regardless of reality. Funders who respond to honest reports of difficulty with support rather than scrutiny encourage the honesty that genuine learning requires.

Internal learning reviews: Regular internal conversations — what are we learning from our grants? What's surprising? What should we do differently? — build learning habits that formal evaluations alone can't create.

Sharing what you learn: Publishing evaluation findings — including when they're inconclusive or unflattering — contributes to the sector evidence base and demonstrates genuine commitment to learning.

Rewarding candour: Grantees who are willing to share what's working AND what isn't deserve recognition, not suspicion. Foundations that explicitly value honest reporting attract more honest partners.

Learning from failure: Grant programmes that never fund things that fail are probably not taking enough risk. The philanthropic system needs to be able to fund and learn from failure rather than only documenting success.

Practical evaluation design

For foundations beginning to engage more seriously with evaluation:

Start with questions, not methods: What do you genuinely not know that you'd most like to understand? The most useful evaluation answers the questions that most affect future decisions.

Match rigour to stakes: A large, multi-year programme that consumes a significant share of the grant budget deserves rigorous evaluation. A small pilot deserves a lighter learning approach.

Consider external review: Independent evaluators bring objectivity that internal assessment can't match. Even a light-touch external review adds credibility that internal reporting lacks.

Involve grantees: The best evaluations involve grantees as active participants — in designing evaluation questions, in sharing learning, and in interpreting findings. Extractive evaluation (done to grantees rather than with them) misses important perspectives.

Act on findings: Evaluation that doesn't change anything isn't worth doing. Before commissioning evaluation, ask: what would we do differently if we found X? If the answer is "nothing," the evaluation isn't worth the investment.


Tahua's grants management platform supports learning and evaluation with consistent outcome tracking across the grant portfolio, grantee report aggregation, and the analytics that help funders identify patterns in their portfolio data — turning grantee reporting from a compliance exercise into a genuine learning resource.

Book a conversation with the Tahua team →