The most effective grantmakers are learning organisations. They systematically gather evidence about what their grants are achieving, use that evidence to improve their programmes, and are willing to change direction when the evidence points that way. This is harder than it sounds — it requires investing in evaluation when there's always a temptation to spend more on grants, and it requires genuine openness to finding that your programme isn't achieving what you hoped.
Grantmaking at scale involves significant resources and public trust. Community trusts, gaming trusts, and foundations hold and deploy funds on behalf of communities and donors. The question of whether those resources are achieving their intended purposes is not optional — it's fundamental to the legitimacy and effectiveness of the grantmaking enterprise.
Beyond accountability, learning improves outcomes. A grantmaker who doesn't know what's working can't direct resources toward what's most effective. A grantmaker who doesn't understand why applications from certain communities are thin can't address the barriers. A grantmaker who doesn't track what happens to funded programmes after the grant ends can't understand sustained impact.
Learning in grantmaking operates at four levels. Programme learning — understanding how a specific grant programme is performing. Are we reaching the organisations we intended to reach? Are grants going to the priorities we set? What are grantees achieving with the funding?
Portfolio learning — understanding patterns across multiple programmes. Are we inadvertently concentrating funding in certain types of organisations? Are there geographic gaps? Is our funding spread across different levels of intervention (service delivery vs. advocacy vs. systems change)?
Sector learning — understanding the broader field. What are other funders doing? What does the evidence say about effective approaches to the problems we're trying to address? How is the policy and funding environment changing?
Organisational learning — understanding the funder's own practice. Are our processes fair and efficient? How do applicants experience our programme? What barriers are we inadvertently creating?
Several practical methods generate this learning. Grantee reporting analysis — most funders receive reports from grantees. Too often, these reports sit in files unread or are scanned only for compliance. Systematic analysis of grantee reports — looking for patterns in outcomes, challenges, and learning — turns required reporting into valuable evidence.
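To make that concrete, here is a minimal sketch of pattern analysis across grantee reports. It assumes a hypothetical CSV export (reports.csv) with programme, outcome_area, and challenges columns; real grants systems will export different fields, so treat the names as placeholders.

```python
# Minimal sketch: scanning grantee reports for recurring patterns.
# Assumes a hypothetical export "reports.csv" with columns
# "programme", "outcome_area", and "challenges" (free text).
import csv
from collections import Counter

outcome_counts = Counter()    # (programme, outcome area) -> report count
challenge_counts = Counter()  # keyword -> number of reports mentioning it

with open("reports.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        outcome_counts[(row["programme"], row["outcome_area"])] += 1
        text = row["challenges"].lower()
        # Crude keyword scan; a real analysis might code themes by hand
        # or use text clustering, but even simple counts surface patterns.
        for keyword in ("staffing", "volunteers", "venue", "demand", "funding"):
            if keyword in text:
                challenge_counts[keyword] += 1

print("Most-reported outcome areas:", outcome_counts.most_common(5))
print("Recurring challenges:", challenge_counts.most_common())
```

Even counts this crude can surface, say, a staffing challenge recurring across a whole programme, turning a reporting formality into a prompt for funder action.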
Grantee surveys — proactively asking grantees about their experience: Was the application process proportionate? Did the grant achieve what we intended? What support would have been more helpful? Grantee surveys should be conducted by someone other than the grants staff the grantee deals with, to reduce social desirability bias.
Applicant surveys — asking unsuccessful applicants about their experience is particularly valuable. Why did they apply? What happened after they were declined? Was the process fair? What would have made it better?
Case studies and deep dives — selecting a small number of grants for detailed examination: visiting the programme, talking to participants, and understanding what actually happened compared to what was planned.
External evaluation — commissioning independent evaluation of a grant programme or portfolio. External evaluators bring objectivity and methodological expertise that internal learning may lack. Best practice is to commission independent evaluation at regular intervals (every three to five years for ongoing programmes).
Counterfactual analysis — the hardest question in grantmaking evaluation is what would have happened without the grant. Rigorous counterfactual analysis is methodologically demanding, but answers to even simple follow-up questions provide useful evidence: did funded organisations continue operating after the grant ended? Did the funded work continue?
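As an illustration of the simple version of that question, the sketch below compares two-year continuation rates for funded and declined organisations. The records and organisation names are entirely hypothetical, and declined applicants are not a clean comparison group, so a gap like this is indicative rather than causal.

```python
# Toy continuation-rate comparison as a rough counterfactual proxy.
# All records are hypothetical: (organisation, was funded, still
# operating two years after the funding decision).
followups = [
    ("Org A", True, True),
    ("Org B", True, True),
    ("Org C", True, False),
    ("Org D", False, True),
    ("Org E", False, False),
    ("Org F", False, False),
]

def continuation_rate(funded):
    """Share of organisations in one group still operating."""
    group = [still for _, was_funded, still in followups if was_funded == funded]
    return sum(group) / len(group)

print(f"Funded organisations still operating:   {continuation_rate(True):.0%}")
print(f"Declined organisations still operating: {continuation_rate(False):.0%}")
```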
In practice, several barriers get in the way. Time pressure. Grants staff are typically managing many things simultaneously — assessment rounds, relationship management, reporting review, governance support. Finding time for systematic learning is difficult when operational demands are constant.
Fear of findings. Learning sometimes produces findings that are uncomfortable — a programme isn't reaching its intended beneficiaries, a theory of change isn't working, a large grant recipient isn't performing. Fear of uncomfortable findings is a real barrier to genuine learning.
Data poverty. Funders often don't collect the data they need to learn from their grants. Application data, grant amounts, and grantee reporting data are typically held in different systems or not systematically tracked. Without good data infrastructure, learning is ad hoc and anecdotal.
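One way to picture the fix is a single grant record that joins the three systems on one identifier. The structure below is a sketch with assumed field names, not a prescribed schema; the point is that once application, award, and reporting data share a key, portfolio questions become simple queries.

```python
from dataclasses import dataclass, field

@dataclass
class GrantRecord:
    """Application, award, and reporting data joined on one grant ID,
    rather than scattered across three unconnected systems."""
    grant_id: str
    applicant: str
    programme: str
    amount_requested: float                # from the application system
    amount_awarded: float                  # from the grants ledger
    outcomes_reported: list[str] = field(default_factory=list)  # from grantee reports

def grants_without_outcomes(records: list[GrantRecord]) -> list[str]:
    # A portfolio question the join makes trivial: which grants
    # have produced no outcome reporting at all?
    return [r.grant_id for r in records if not r.outcomes_reported]
```

Whether the join lives in a dataclass, a database view, or a grants management system matters less than that it exists; the barrier described above is the absence of any shared key at all.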
Short-term thinking. Grant impacts often unfold over years, but funder evaluation tends to focus on the grant period. Tracking what happens beyond it requires sustained relationships and data systems that most funders haven't built.
Reporting that doesn't generate useful information. Many funders require grantee reports that generate compliance evidence but not programme learning. If your report templates ask for outputs and financial accounting but not outcomes and learning, that is exactly what you'll get back.
A few deliberate practices build a learning orientation. Allocate resources. Learning requires time and sometimes money. Allocating 2-5% of the grants budget to evaluation is standard in the sector (on a $10 million annual grants budget, that is $200,000 to $500,000 a year) and produces returns many times its cost.
Make learning a board priority. If governance doesn't value learning, staff won't invest in it. Board agendas that include systematic review of programme outcomes and evidence from evaluation signal that learning matters.
Share learning publicly. Funders who share what they're learning — including uncomfortable findings — contribute to sector-wide knowledge and build trust with applicants and communities. Transparency about what's working and what isn't is a mark of genuine learning orientation.
Act on what you learn. Learning that doesn't produce change is theatre. When evaluation shows a programme isn't working, be willing to redesign or stop it. When evidence shows a different approach is more effective, shift resources. Learning has value only when it influences decisions.
Close the loop with grantees. When grantees invest time in reporting and evaluation, sharing what you learned and what you're doing about it closes the loop and maintains trust. Grantees who see their reporting used are more likely to invest in future reporting.
Learning from your own programmes is valuable, but it's even more powerful when combined with evidence from the broader field. Evidence-informed grantmaking draws on:
Research literature. Academic research on the effectiveness of approaches in the areas you fund — health behaviour change, educational interventions, conservation approaches — provides external evidence about what tends to work.
Sector bodies and networks. Philanthropy New Zealand, Philanthropy Australia, and international networks produce and share evidence about effective grantmaking practice. Participation in these networks accelerates learning.
Peer learning. Other funders working in the same space have accumulated knowledge about what works. Formal peer learning arrangements and informal relationships are valuable sources of evidence.
Co-evaluation with grantees. Some funders and grantees design evaluation together, with shared ownership of findings. This produces richer evidence and builds grantee evaluation capacity.
Tahua helps grantmakers build the data infrastructure that makes learning possible — application data, outcome reporting, grant tracking, and reporting analytics that turn individual grant records into programme evidence.