Grant Programme Review: How Funders Assess and Improve Their Grantmaking

Effective grantmakers don't just distribute funds — they learn, improve, and adapt. Grant programme reviews — systematic examinations of how well a grant programme is achieving its goals — are one of the most important tools for improving philanthropic practice. Yet many foundations conduct reviews infrequently, superficially, or primarily for external accountability rather than genuine learning. This article explores what good grant programme reviews look like and how they improve grantmaking.

Why review grant programmes?

Grant programmes are designed with assumptions — about how change happens, what organisations can achieve, which approaches work. Experience tests these assumptions. Some prove accurate; others don't. Without systematic review, funders continue investing in approaches that don't work, miss opportunities for improvement, and fail to learn from what their grantees are experiencing.

Strategic alignment: Grant programmes drift. Individual decisions accumulate without strategic review. A periodic review examines whether the portfolio of grants actually reflects current strategic priorities or whether it has evolved through ad hoc decisions.

Programme effectiveness: Are grants achieving what they were intended to achieve? Are the funded activities actually producing the intended outcomes? A rigorous review examines evidence of effectiveness, not just activity.

Grantee experience: How do grantees experience the relationship? Are application and reporting requirements proportionate? Is decision-making timely and clear? Is the funder easy to work with? Grantee perspectives are often the most valuable and least systematically gathered.

Operational efficiency: Are staff and systems working well? Are processes proportionate to grant size? Is the cost of running the programme appropriate relative to its scale?

Emerging context: Has the external environment changed since the programme was designed? New evidence, policy changes, emerging needs, or sector developments may require programme adaptation.

Types of grant programme reviews

Annual programme review

A light-touch review conducted at the end of each grant year — examining portfolio composition, application and assessment processes, decision patterns, and grantee outcomes. Annual reviews inform refinements to guidelines, processes, and priorities without requiring major strategic revision.

Strategic review

A deeper review conducted every 3-5 years, or when significant changes in context or performance warrant it. A strategic review examines the programme's theory of change, evidence of impact, strategic alignment, and whether the programme is still the right response to the problem it was designed to address.

External evaluation

An independent evaluation commissioned from an external evaluator — typically used for major grant programmes, or when a foundation wants an assessment free of internal bias. External evaluations cost more but provide greater credibility and an outside perspective.

Grantee experience survey

A structured survey of current and past grantees examining their experience of the funder relationship. Tools like the Center for Effective Philanthropy's Grantee Perception Report provide benchmarked data against other funders.

Portfolio analysis

A systematic analysis of the grant portfolio — by geography, issue area, organisation type, grant size, and duration — examining whether the portfolio reflects strategic intent and identifying gaps or imbalances.
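At its simplest, this kind of analysis compares each segment's share of funding against the programme's stated targets. A minimal sketch in Python — the field names, figures, and target shares are all illustrative, not a real schema:

```python
from collections import defaultdict

# Hypothetical grant records; field names and amounts are illustrative.
grants = [
    {"issue_area": "Education", "amount": 50_000},
    {"issue_area": "Education", "amount": 75_000},
    {"issue_area": "Health",    "amount": 120_000},
    {"issue_area": "Housing",   "amount": 30_000},
]

# Total funding per issue area.
totals = defaultdict(int)
for g in grants:
    totals[g["issue_area"]] += g["amount"]
grand_total = sum(totals.values())

# Illustrative strategic targets: intended share of funding per area.
targets = {"Education": 0.40, "Health": 0.35, "Housing": 0.25}

# Gap between actual share and target share for each area.
gaps = {area: round(totals.get(area, 0) / grand_total - t, 2)
        for area, t in targets.items()}
print(gaps)  # positive = over-weighted vs. target, negative = under-weighted
```

The same grouping can be repeated by geography, organisation type, grant size, or duration to surface the gaps and imbalances the review is looking for.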

Review framework

A comprehensive grant programme review typically examines:

Design
- Is the programme theory of change sound? Is the logic connecting funding to outcomes credible?
- Are eligibility criteria appropriate? Do they include the right organisations and exclude the wrong ones?
- Are the grant amount and duration appropriate for what's being funded?
- Are programme guidelines clear and accessible?

Process
- Is the application process proportionate and accessible?
- Is assessment rigorous and consistent?
- Is decision-making timely?
- Is communication with applicants clear and respectful?
- Are reporting requirements appropriate?

Portfolio
- Does the portfolio reflect stated strategic priorities?
- Is the mix of geographies and organisation types appropriate?
- Are there gaps in the portfolio?
- Is there appropriate diversification across approaches?

Impact
- What is the evidence of impact from funded activities?
- Are grantees achieving their stated outcomes?
- Is there evidence that the programme is contributing to change at the population/system level?
- What does external evidence say about the approaches being funded?

Learning and adaptation
- Is learning being generated from the programme?
- Is that learning being used to improve the programme?
- Are grantees sharing learning with each other and with the funder?

Gathering data for reviews

Funder data: Application data, grant decision data, payment records, grantee reports, assessment notes, and staff reflections — all held in grants management systems.

Grantee surveys and interviews: Structured surveys or semi-structured interviews with current and past grantees; best conducted by an independent third party to reduce social desirability bias.

Site visits and observations: Direct observation of grantee work provides richer understanding than written reports.

External evidence review: Literature on the approaches being funded; evaluation evidence from comparable programmes; expert perspectives on the issue area.

Stakeholder consultation: Views from people with relevant expertise — issue area experts, sector peers, beneficiary communities.

Acting on review findings

Reviews only add value if their findings are acted on. Common responses to review findings:

Strategy refinement: Updating programme priorities, theory of change, or strategic focus based on evidence of what's working and what isn't.

Process improvement: Simplifying application requirements, improving communication, streamlining assessment, reducing reporting burden.

Portfolio rebalancing: Increasing or decreasing grants in particular areas, adding new grantees, or exiting from areas where the programme isn't having impact.

Grantee support: Identifying capability gaps and providing capacity building or technical assistance to address them.

Learning publication: Sharing what's been learned — including what hasn't worked — with the broader field.

Creating a learning culture

The most effective foundations embed review and learning into their ongoing culture — not treating it as an occasional compliance exercise. Features of a learning culture:

- Staff are expected to reflect on what's working and share learning
- Grantees are asked regularly about their experience
- External evidence is systematically reviewed
- Mistakes are examined for lessons, not just defended
- Strategy is treated as a hypothesis to be tested, not a fixed plan

Tahua's grants management platform makes programme review easier — with the portfolio analytics, grantee reporting data, assessment records, and trend analysis tools that give funders the information they need for meaningful programme review.

Book a conversation with the Tahua team →