Assessment criteria are the standards against which grant applications are evaluated. They determine what kinds of projects get funded, how assessors compare applications, and — when communicated to applicants — what applicants focus on in their submissions. Poorly designed criteria produce inconsistent assessment, fund the wrong things, and generate appeals. Well-designed criteria produce reliable, defensible decisions that serve the programme's goals.
Assessment criteria serve multiple functions simultaneously:
Guide assessment. Criteria give assessors a common framework for evaluating applications — so that different assessors are considering the same dimensions and can produce consistent, comparable scores.
Shape applications. When criteria are communicated to applicants in advance, they signal what to focus on. Clear criteria produce more focused, relevant applications.
Demonstrate fairness. Documented criteria provide evidence that assessment was conducted fairly and consistently — important for both applicant confidence and probity defence.
Connect strategy to selection. Criteria should reflect the programme's strategic priorities — funding what the programme is designed to fund, not what happens to be proposed.
Most grant programmes assess applications across some combination of these dimensions:
Impact or benefit. What difference will the funded activity make? How many people will benefit? How significant is the need? Is the proposed outcome meaningful relative to the grant size?
Approach or methodology. Is the proposed approach evidence-based, credible, and likely to achieve the stated outcomes? Is there a clear logic connecting activities to outcomes? Is the approach appropriate for the target population?
Organisational capacity. Can the applicant deliver this project? Do they have relevant experience, appropriate governance, and financial management capability? Is the staffing plan credible?
Value for money. Is the cost per unit of outcome reasonable? Is the budget justified? Is there leverage from other funding sources?
Alignment with programme priorities. Does the application specifically address the programme's focus areas, target populations, or geographic scope?
Risk and feasibility. Are the risks identified and managed? Is the timeline realistic? Are the key assumptions credible?
Not all programmes need all of these dimensions. A small community grant programme might assess on impact and capacity only; a major research grant might also assess methodology and value for money in depth.
The relative weight of different criteria should reflect the programme's priorities:
Equal weighting — each criterion contributes equally to the total score — is simple and appropriate when all dimensions are equally important.
Differential weighting — different criteria contribute different proportions to the total — is appropriate when some dimensions are more important to the programme. A capacity-building programme might weight organisational capacity most heavily; an innovation programme might weight approach and novelty most heavily.
Threshold requirements — minimum scores on specific criteria that must be met regardless of total score — are appropriate for criteria that are essential rather than just important. If an application fails to demonstrate any impact, no amount of organisational capacity makes it fundable.
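The interplay between differential weights and thresholds can be made concrete in code. The sketch below is illustrative only — the criterion names, weights, and the single threshold are assumptions, not taken from any particular programme:

```python
# Illustrative weighted scoring with a threshold requirement.
# Weights sum to 1.0; "impact" carries a minimum score that must be
# met regardless of the weighted total.
CRITERIA = {
    # name: (weight, minimum required score or None)
    "impact": (0.40, 3),
    "approach": (0.30, None),
    "capacity": (0.20, None),
    "value_for_money": (0.10, None),
}

def assess(scores: dict) -> tuple:
    """Return (weighted total, list of failed thresholds).

    An application that fails any threshold is unfundable
    regardless of its weighted total.
    """
    failed = [
        name for name, (_, minimum) in CRITERIA.items()
        if minimum is not None and scores[name] < minimum
    ]
    total = sum(scores[name] * weight
                for name, (weight, _) in CRITERIA.items())
    return round(total, 2), failed

# Strong everywhere except impact: high total, but still fails.
total, failed = assess(
    {"impact": 2, "approach": 5, "capacity": 5, "value_for_money": 5})
print(total, failed)  # → 3.8 ['impact']
```

The point the code makes is the same one the prose makes: a threshold is not just a heavy weight. No weighted total, however high, can compensate for failing an essential criterion.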
Assessment criteria need descriptors — specific descriptions of what a strong response looks like at different score levels. Without descriptors, two assessors using the same criteria can reach very different scores because they have different implicit standards.
Good descriptors are:
- Specific — not "strong application" but "demonstrates clear theory of change with specific, measurable outcomes"
- Anchored — different score levels are anchored by specific observable features, not vague adjectives ("excellent," "satisfactory," "poor")
- Calibrated — the score levels make practical distinctions that assessors can actually observe in applications
Example of a poor descriptor:
5 = Excellent
3 = Satisfactory
1 = Poor
Example of a better descriptor:
5 = Theory of change is clearly articulated with specific, measurable outcomes; strong evidence that the approach will achieve stated outcomes; realistic and well-justified timeline
3 = Theory of change is present but outcomes are vague; approach is plausible but not strongly evidenced; timeline has some unexplained gaps
1 = No clear connection between activities and outcomes; approach lacks credibility; timeline is unrealistic
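Descriptors like the better example above can be held as structured data so that every assessor works from the same anchors. The data structure below is an illustrative assumption; the anchor wording is condensed from the example above:

```python
# Illustrative rubric: anchored descriptors keyed by criterion and level.
# Anchor text condensed from the "better descriptor" example.
RUBRIC = {
    "approach": {
        5: ("Theory of change clearly articulated with specific, measurable "
            "outcomes; strong evidence the approach will achieve them; "
            "realistic, well-justified timeline"),
        3: ("Theory of change present but outcomes vague; approach plausible "
            "but not strongly evidenced; timeline has unexplained gaps"),
        1: ("No clear connection between activities and outcomes; approach "
            "lacks credibility; timeline unrealistic"),
    },
}

def descriptor(criterion: str, score: int) -> str:
    """Return the anchor text for the highest defined level at or
    below `score`, so intermediate scores (2, 4) resolve to the
    nearest lower anchor."""
    levels = RUBRIC[criterion]
    anchored = max(level for level in levels if level <= score)
    return levels[anchored]
```

A score of 4, for instance, resolves to the level-3 anchor — one reasonable convention for handling the levels a rubric leaves undefined; a programme could equally require assessors to justify intermediate scores against both adjacent anchors.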
Assessment criteria should be published in the grant guidelines before the application round opens. Transparency lets applicants focus their submissions on what will actually be assessed, and it provides evidence that selection decisions were made against standards known in advance.
Some funders worry that communicating criteria will lead to applications that are formulaic or designed to game the scoring. In practice, the opposite is usually true — clear criteria produce more relevant applications, not more gaming.
Different grant types require different assessment approaches:
Community and social grants. Emphasis on need, local benefit, organisational capacity, and value for money. Assessment is often qualitative.
Research grants. Strong emphasis on methodology, research design, and investigator track record. Peer review with domain-specific expertise is standard.
Capital and infrastructure grants. Emphasis on project feasibility, procurement process, consenting, and long-term maintenance. Technical assessment is important.
Innovation grants. Emphasis on novelty, approach credibility, and potential for scale or replication. Tolerance for risk is part of the programme design.
Multi-year strategic grants. Emphasis on long-term organisational health, strategic alignment, and accountability framework alongside project quality.
Too many criteria. Trying to assess ten or more dimensions in one round leads to superficial assessment of each; four to six criteria allow for meaningful depth.
Overlapping criteria. Criteria that assess the same thing (e.g., "project design" and "project approach" with the same descriptors) waste assessor time and dilute the signal.
Criteria that can't be assessed from the application. If the application form doesn't provide information against a criterion, the criterion can't be meaningfully assessed. Criteria must match what's asked in the application.
Misaligned criteria. Criteria that don't reflect the programme's goals — assessing what's easy to assess rather than what the programme is trying to achieve — fund the wrong things.
Unstated weights. Weights that are implicit to programme staff but not communicated to applicants or assessors create inconsistency. Weights should be stated explicitly.
Tahua's assessment platform supports configurable criteria, weighted scoring, and calibration rubrics — giving assessors a consistent framework and programme managers defensible selection decisions.