The most effective grantmakers are learning organisations — systematically gathering evidence about whether their grants are working, reflecting on what they're learning, sharing knowledge with peers, and adapting their practice over time. Yet learning is often the least resourced function in grantmaking, treated as a nice-to-have rather than a core investment.
This guide covers how funders build genuine learning practice into their grantmaking — moving beyond compliance-focused reporting toward knowledge that actually improves grantmaking decisions.
Grantmaking is making bets under uncertainty. Funders don't know which applications will produce the best outcomes at the time of decision — they're making evidence-informed judgments about likely future performance. Better learning improves the evidence base for those judgments over time.
Grantees have information funders don't have. The people delivering funded programmes know things about what's working and what isn't that programme reports don't capture. Creating genuine channels for grantee knowledge to inform funder practice is a significant opportunity.
The sector benefits from shared learning. Many funders are working on similar problems, funding similar organisations, and making similar mistakes. Knowledge-sharing across funders — about what approaches work, what doesn't, what evaluation evidence exists — creates a public good that individual funders can't create alone.
Accountability without learning is bureaucracy. Reporting that isn't read, evaluation that doesn't change decisions, feedback that isn't acted on — these are accountability mechanisms without learning outcomes. They impose burden without producing benefit.
Application questions that generate learning. Questions that ask about evidence of effectiveness, theory of change, and previous learning create baseline learning data. What do applicants think will work, and why?
Assessment as learning. Assessment panels that discuss not just which applications to fund but what the portfolio of applications reveals about the sector — what problems are coming up, what approaches are common, what gaps exist — generate field intelligence alongside funding decisions.
Progress reports as learning. Progress reports designed to capture what's working, what's changed, and what the grantee is learning — not just what activities have been delivered — generate learning data. Programme staff who read reports with curiosity (what can I learn?) rather than compliance (have the boxes been ticked?) extract more value.
End-of-grant conversations. Rather than (or alongside) a final written report, a reflective conversation between programme officer and grantee at grant completion often generates richer learning than written reports alone. What surprised you? What would you do differently? What should funders know?
Post-funding follow-up. What happened after the grant ended? Did the programme continue? Did the outcomes persist? Following up with grantees 12–24 months after grant completion generates impact evidence that end-of-grant reports can't provide.
The most commonly missed learning opportunity is feedback from grantees on the funder's own practice. What was it like to apply? What was the relationship like? What did the funder do well, and what was unhelpful?
Anonymous feedback mechanisms. Grantees are unlikely to give honest critical feedback through channels where they can be identified — they risk jeopardising future funding relationships. Anonymous surveys, run through independent research firms or philanthropy sector bodies, produce more candid responses.
Systematic collection. Feedback collected systematically — from all grantees and declined applicants, not just the ones the funder chooses to ask — provides more representative data than selective relationship conversations.
Acting on feedback. Feedback that isn't demonstrably acted on quickly erodes trust. If grantees say the application form is too long and nothing changes, they conclude their feedback doesn't matter.
Sharing back. Sharing the aggregated results of grantee feedback surveys — what grantees said, what the funder learned, what it's changing — demonstrates genuine responsiveness and respect for grantee perspectives.
Philanthropy sector bodies. Philanthropy New Zealand and Philanthropy Australia both facilitate peer learning among their members — through networks, conferences, and knowledge resources. Participation in these networks is an important learning investment.
Funder affinity groups. Sector-specific funder networks (health funders, environment funders, arts funders) facilitate learning among funders working in the same field. These peer networks share evaluation evidence, discuss common challenges, and coordinate to avoid duplication.
Published evaluation and learning. Some funders publish their programme evaluations — including honest accounts of what didn't work. These publications contribute to sector knowledge and model a learning culture. The sector has too little published evaluation data; every funder who publishes makes the whole field better informed.
Co-evaluation with grantees. Some funders commission evaluations that actively involve grantees as co-evaluators — designing the evaluation questions together, collecting data collaboratively, and interpreting findings jointly. Co-evaluation produces richer evidence and builds grantee evaluation capability.
Regular reflection practices. Programme teams that have regular structured reflection — after each round, after site visits, after grantee conversations — integrate learning into their practice rather than treating it as a separate activity.
Learning goals for programme staff. Staff development goals that include learning from grantees, from evaluation, and from peer networks signal that learning is valued work, not just an add-on.
Psychological safety. Learning cultures require psychological safety — the ability to acknowledge mistakes, discuss failures, and challenge assumptions without fear. Leaders who model intellectual humility and openness to being wrong create conditions for genuine learning.
Tahua supports grantmaker learning with structured outcome data collection, configurable reporting that captures qualitative learning alongside metrics, and the historical grant records needed for retrospective evaluation and learning.