How to Design a Grant Programme: A Practical Guide for New Funders

Starting a new grants programme involves two distinct challenges that are easy to conflate. The first is strategic: what do you want to fund, at what scale, and for what purpose? The second is operational: how will applications be received, assessed, decided, and followed up? Most programme design attention goes to the strategic questions, yet most operational problems stem from decisions made too quickly, or not made at all, during design.

This guide is focused on the operational design questions — the ones that determine how much work your programme will generate and whether that work produces defensible, auditable outcomes.

Starting with the assessment framework

The assessment framework is the most consequential design decision in any grants programme, and it needs to be made before you open applications.

The assessment framework is the set of criteria against which applications will be evaluated, along with the weighting of those criteria. "We will assess on merit" is not an assessment framework. "We will assess on strategic alignment (40%), community impact (35%), and organisational capability (25%)" is the beginning of one.

Why does this matter so much? Because the criteria determine what your application form needs to ask, what documentation you need to request, and what expertise your assessors need to have. Building a form without a confirmed assessment framework produces a form that doesn't generate the information needed to assess properly.
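The weighted framework above can be sketched as a small scoring function. This is a minimal illustration assuming a 1–5 scoring scale and the example weights (40/35/25) from earlier; the criterion names and scale are assumptions, not a prescribed implementation.

```python
# A minimal sketch of weighted criterion scoring, assuming each
# criterion is scored 1-5. Criteria and weights mirror the example
# framework above; a real programme will define its own.

WEIGHTS = {
    "strategic_alignment": 0.40,
    "community_impact": 0.35,
    "organisational_capability": 0.25,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (1-5) into one weighted total."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the framework criteria")
    if abs(sum(WEIGHTS.values()) - 1.0) > 1e-9:
        raise ValueError("criterion weights must sum to 100%")
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# An application scored 4, 3 and 5 on the three criteria:
print(round(weighted_score({
    "strategic_alignment": 4,
    "community_impact": 3,
    "organisational_capability": 5,
}), 2))
```

The validation checks are the point: a framework whose weights do not sum to 100%, or whose criteria are not all scored, should fail loudly at design time rather than silently in a panel meeting.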

Common design errors at this stage:
- Too many criteria (anything above six adds complexity without proportionate benefit)
- Criteria that overlap or are not independently assessable
- Criteria that reflect what the funder values rather than what the funder can assess from an application
- Weighting that doesn't reflect the programme's actual priorities

The assessment framework also defines your documentation requirements. If "organisational financial health" is a criterion, you need to define what constitutes sufficient evidence — audited accounts, management accounts, bank statements? Getting this right at the design stage means applicants provide what you need and assessors know what to look for.

Eligibility: the screening layer that determines your workload

Eligibility criteria are often described as an equity question — who can apply — but they are equally a workload management decision. Every ineligible application that progresses through assessment generates rework, consumes assessor time, and raises the risk of appeals.

Well-designed eligibility criteria:
- Are precise and objectively determinable from application data (not "significant community benefit" but "registered charity with a Charities Services number")
- Can be checked at point of application, not after assessment
- Match the programme's actual intended beneficiaries, not the funder's assumptions about who might apply
- Are consistent across application rounds (changing eligibility criteria between rounds is a common source of applicant confusion and complaints)

Eligibility screening is most effective when it is built into the application portal. An applicant who is screened out at step two of a form has wasted less time than one who completes a forty-question form before learning they were ineligible. Guided eligibility checks at the start of an application process reduce the volume of ineligible applications reaching assessment and improve the applicant experience for everyone.
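The screening-at-the-portal idea can be sketched as a pre-check that runs before the full form opens. The specific rules below (a Charities Services number, a region restriction) are illustrative assumptions, not any real programme's criteria.

```python
# A minimal sketch of a guided eligibility pre-check, run at step two
# of the application, before the applicant sees the full form.
# The rules here are illustrative assumptions only.

def eligibility_check(answers: dict) -> list:
    """Return the reasons an applicant is ineligible; empty means eligible."""
    failures = []
    if not answers.get("charities_services_number"):
        failures.append(
            "Must be a registered charity with a Charities Services number")
    if answers.get("region") not in {"Auckland", "Waikato"}:
        failures.append("Programme is limited to the funded regions")
    return failures

# Screened out early, before investing time in a forty-question form:
print(eligibility_check({"charities_services_number": "", "region": "Otago"}))
```

Because every rule is objectively determinable from the applicant's own answers, the same check can run identically for every applicant and be logged as part of the audit trail.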

The question of what to do with borderline eligibility cases — applications that appear to meet the criteria but require judgement — needs to be decided in advance. If borderline cases go to the programme manager for a ruling, that needs a documented process. If they are assessed and flagged, that needs a documented process. Improvising a process for the first borderline case under time pressure rarely produces a consistent outcome.

Panel design: who assesses and under what rules

The assessment panel structure is a decision with probity implications. Panel members are typically selected for expertise, but their relationships with applicants and the sector create conflicts of interest that need to be managed architecturally, not just procedurally.

Key panel design decisions:

Panel size. A panel of two can deadlock with no resolution mechanism. A panel of five or more makes consensus harder to reach. Three to four members typically provide adequate breadth while keeping deliberation manageable.

Independence requirements. For government and Crown entity funders, panel independence is a probity requirement, not just good practice. For charitable trust funders, it is typically a governance expectation. Define independence requirements explicitly: does an assessor with a prior professional relationship with an applicant organisation have a conflict? What about an assessor who has previously received a grant from an applicant? These are real scenarios that need a policy, not a case-by-case judgement.

Assessor expertise. A panel composed entirely of sector insiders will have extensive conflicts of interest. A panel composed entirely of outsiders will lack the domain knowledge to assess applications accurately. The design question is the right balance for your specific programme — and whether you have a clear protocol for managing the inevitable conflicts that arise from sector expertise.

The convenor role. The panel convenor is not a neutral facilitator. They are responsible for the integrity of the assessment process — ensuring conflict-of-interest (COI) declarations are complete, managing conflicts when they arise, keeping the assessment on criteria, and producing a panel record that documents the recommendation and its basis. This is a substantial responsibility that needs to be explicitly delegated, not assumed.

Documentation and the audit trail

Before you open applications, decide what records you need to produce to satisfy your governance and accountability obligations. The answer depends on who the funder is and what those obligations are.

For government and Crown entity funders, the audit trail standard is set by Treasury's better practice guidance and the OAG's expectations for contestable funding. The minimum is: a record of every application received, every eligibility assessment, every COI declaration, every scoring event, the panel recommendation, the decision, and the decision communication. Each record needs a timestamp and attribution.
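As a sketch, the minimum record set above amounts to an append-only event log in which every entry carries a timestamp and an attribution. The field names and event names here are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of an append-only audit trail, assuming each event
# (eligibility check, COI declaration, scoring, decision) is logged
# with a timestamp and the person who performed it. Field names are
# illustrative, not a prescribed schema.

from datetime import datetime, timezone

audit_log = []

def record_event(application_id: str, event: str, actor: str, detail: dict) -> dict:
    """Append a timestamped, attributed event to the audit trail."""
    entry = {
        "application_id": application_id,
        "event": event,    # e.g. "eligibility_assessed", "score_entered"
        "actor": actor,    # attribution: who performed the action
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detail": detail,
    }
    audit_log.append(entry)
    return entry

record_event("APP-042", "score_entered", "assessor_2",
             {"criterion": "community_impact", "score": 4})
print(len(audit_log))
```

The design point is that the record is produced as a side effect of the action itself, not compiled afterwards from memory and email threads.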

For charitable trust funders, the governance obligation is typically to the board and to the Charities Act. The board needs to be able to satisfy itself that the assessment process was fair and that funds were allocated consistently with the trust deed. That requires a panel record that explains the basis for each allocation decision — not just a list of approved grants.

The documentation standard you design for is the standard your records management system needs to support. A system that produces timestamped records automatically, without requiring manual compilation, is much more likely to actually produce those records than one that relies on someone remembering to do it.

Round timing and applicant communications

The operational calendar of a grant round is a design decision with significant downstream consequences.

Application open period. Too short, and well-qualified applicants who need time to prepare a quality submission are disadvantaged. Too long, and you generate administrative overhead managing queries and holding the process open. Four to eight weeks is typical for most programme types, with shorter periods sometimes appropriate for emergency or responsive grants.

Assessment window. The time between applications closing and decisions being communicated needs to be realistic for the assessment process you have designed. If you have a complex weighted scoring process with a four-person panel and a governance sign-off requirement, four weeks is not a realistic assessment window. Setting an unrealistic assessment timeline creates pressure to shortcut the process.

Decision communications. The timing and content of decision communications affects applicant experience and manages expectations. Successful applicants need to know what happens next — what conditions are attached, when funding will be released, what reporting is required. Unsuccessful applicants deserve a meaningful explanation of the basis for the decline, even if that explanation is brief. Both sets of communications need to be drafted and approved before you communicate them.

Starting with a system that scales

The most common pattern in grants programme design is to start with a manual, email-based process with the intention of reviewing it once the programme is "established." The problem is that established programmes are harder to change — they have more applicants, more grant recipients, more staff who have built workflows around the existing process, and more history that needs to be migrated.

The design stage is the right time to make decisions about the operational system. A purpose-built grants management platform, configured at the outset to match your assessment framework, eligibility criteria, and panel structure, produces a programme that is auditable from day one rather than one that needs to be retrospectively documented when an audit or OIA request arrives.

The most important criteria for a system at programme launch:

- Can the application form be configured to your exact questions and document requirements?
- Does the system handle COI declarations and assessor assignment automatically?
- Is the assessment scoring structured and recorded — or does it happen outside the system?
- Can the panel record be produced automatically from the assessment data, or does someone need to compile it?
- Does the decision communication workflow create a logged record of every letter sent?

If the answer to any of these is "we'll handle that manually for now," the decision you are actually making is to create debt that will be resolved either by a future system migration or by an audit finding.


If you are designing a new grants programme and want to see how Tahua handles configuration from the design stage, the how it works overview covers the setup and assessment workflow. To discuss your specific programme requirements, book a conversation.