Piloting a New Grant Programme: How to Test Before You Scale

Launching a new grant programme is an investment in the unknown. Funders are making assumptions about what applicants need, what works, what the right amount of funding is, and what accountability requirements are proportionate. Many of these assumptions will prove wrong.

Piloting — running a small-scale version of a new programme before committing to full scale — is one of the most valuable risk management tools available to funders. A well-designed pilot tests critical assumptions, generates learning, and enables programme refinement without the costs of getting a full-scale programme wrong.

Why pilots matter

Assumptions are cheap; mistakes are expensive. Every grant programme design embeds assumptions — about applicant behaviour, sector capacity, appropriate funding amounts, administrative burden, and likely outcomes. Piloting tests these assumptions while they're still relatively easy to revise.

Administrative burden is easier to calibrate at a small scale. A pilot with 20 grants reveals how long applications take to assess, whether reporting requirements are proportionate, and whether the grant agreement has gaps — all at a manageable scale. Making the same discoveries in a 500-grant programme is far more costly.

Learning compounds. The learning from a pilot doesn't just improve the programme — it improves the team's understanding of the sector, the problem, and what good grantmaking looks like for this purpose.

Stakeholder confidence. A piloted programme that has demonstrably been refined based on evidence is more credible to stakeholders — board, applicants, and partner funders — than one launched without testing.

Designing a grant programme pilot

Define clear pilot objectives. What are you testing? Typical pilot objectives:
- Test whether the target applicant population finds the programme accessible and relevant
- Assess administrative burden (staff time and applicant time)
- Test whether funded activities produce the intended outcomes at the assumed scale
- Identify gaps in programme design (eligibility criteria, funding amounts, reporting requirements)
- Test the assessment process and criteria

Scale the pilot appropriately. A pilot should be large enough to generate meaningful learning — not so small that every case is exceptional — but not so large that the programme's assumptions are deeply embedded before you can revise them. 15-30 grants is a common pilot scale for a new programme.

Select for diversity, not just quality. Pilot cohort selection should aim for diversity in organisation type, size, geography, and approach. A pilot dominated by a narrow applicant type won't reveal how the programme works across the intended range.

Build in evaluation from day one. A pilot without evaluation is just a small version of a programme. Define evaluation questions before the pilot starts: what evidence will you collect, when, and from whom?

Preserve the ability to revise. The point of a pilot is to learn and change. Pilot communications should be clear that the programme design may change based on learning — managing applicant expectations while preserving the flexibility to adapt.

What to test in a pilot

Application process. How long does it take applicants to complete an application? Where do they get confused or need support? Are eligibility criteria clear? Do the application questions generate useful information?

Assessment process. How long does assessment take per application? Are the criteria sufficient to make good decisions? What are the hardest assessment calls and why?

Funding amounts. Are the grant amounts calibrated appropriately to the work they're intended to fund? Are grants too small to be meaningful? Too large to absorb within the grant period?

Grant conditions and agreement. Do grantees understand and agree with the grant conditions? Are any conditions unworkable in practice?

Reporting requirements. Are reporting requirements proportionate to grant size? Do reports generate useful information? Are grantees finding reporting onerous?

Outcomes. Are funded activities generating the intended outcomes? Are there unexpected benefits or harms?

Evaluating and acting on pilot learning

Document learning systematically. Don't rely on informal team discussion. Structured evaluation — programme staff reflection, applicant feedback surveys, grantee feedback on the programme experience, and outcome data — produces better evidence than impressions alone.

Distinguish between programme design issues and delivery issues. If pilots show poor outcomes, is that because the programme design is wrong, or because this particular cohort of grantees had delivery challenges? These have different implications.

Be genuinely willing to change. A pilot that generates learning but doesn't change the programme is a waste of effort. Board and leadership need to support meaningful programme revision based on pilot findings, including potentially significant changes to eligibility, criteria, or approach.

Document what you learned and why you changed what you changed. Programme design decisions informed by pilot learning should be recorded — not just what changed but why. This institutional memory is valuable for future programme reviews and for new staff.

From pilot to full programme

The transition from pilot to full programme involves:
- Incorporating pilot learning into programme design
- Scaling administrative infrastructure (staff capacity, systems, processes)
- Communicating the full programme in a way that builds on pilot credibility
- Maintaining learning practices established in the pilot

Some funders choose to run multiple pilot cohorts — iterating through two or three rounds of 20-30 grants — before committing to scale. This is particularly sensible for novel or high-risk programme designs.

Tahua supports new programme launches with configurable application forms, assessment workflows, and reporting frameworks that can be refined between rounds.

Book a conversation →