Grant assessment criteria are the bridge between a funder's programme objectives and its funding decisions. Well-designed criteria give assessors a consistent basis for evaluation, make decisions defensible, and produce outcomes aligned with the programme's intent. Poorly designed criteria produce inconsistent assessments, create disputes with applicants, and — for government funders — create accountability risk.
This guide is for funders designing or reviewing the assessment criteria for a grants programme, and for those evaluating whether their grants management system supports their criteria effectively.
Assessment criteria serve three functions simultaneously:
They communicate what the programme values. Before a single application is received, the criteria signal to potential applicants what the funder considers important. Criteria that emphasise community benefit signal different priorities from criteria that emphasise project budget efficiency. The criteria are a public statement of the programme's values.
They structure assessor evaluation. Criteria give assessors a consistent framework for comparing applications that may be very different from one another. Without criteria, assessment becomes impressionistic — assessors make holistic judgements that may or may not reflect the programme's intent, and that are difficult to reconcile across a panel.
They create a defensible decision record. When a funding decision is challenged — by an unsuccessful applicant, through an OIA request, or in an audit — the criteria are the basis for the defence. A decision made against explicit criteria is defensible. A decision based on unstructured, impressionistic assessment is not.
In practice, a handful of design problems recur across programmes.

Criteria that cannot be scored. Criteria need to be specific enough that assessors can score them. "The project is well-planned" is not a criterion — it is an impression. "The project includes a clear implementation timeline with identified milestones and responsible parties" is a criterion that can be scored.
Too many criteria. When programmes try to capture every desirable characteristic in the criteria, assessors face an impossible task. Ten criteria at equal weight diffuse focus. Three to five well-chosen criteria at appropriate weights are more effective and more defensible.
Inconsistent weighting. Criteria that are weighted equally when they are not equally important produce distorted scores. A programme that primarily cares about community reach should weight the community reach criterion more heavily, not treat it as equivalent to administrative quality.
Criteria that do not connect to objectives. Assessment criteria should derive from the programme's objectives. If the programme objective is to fund innovative approaches to a specific problem, the criteria should explicitly assess innovation and relevance to the problem — not generic project management competence.
Eligibility criteria embedded in assessment criteria. Eligibility and merit are different things. Eligibility criteria determine whether an application can be considered. Assessment criteria determine how well it performs against the programme's goals. Mixing them creates confusion — an applicant who meets a threshold on an eligibility criterion should not receive additional merit points for exceeding it.
Avoiding these problems comes down to a few design principles.

Start from the programme objective. What outcomes does this programme exist to produce? Assessment criteria should be designed to select the applications most likely to produce those outcomes.
Be specific about what assessors are evaluating. Each criterion should describe what assessors are looking for in concrete terms. "Community impact" is not specific enough. "Evidence that the project will reach a significant number of people in the target community, with credible participation projections" is.
Weight criteria to reflect their importance. If community reach is twice as important as project budget efficiency, weight it accordingly. Document the rationale for weighting decisions — this becomes part of the programme's accountability documentation.
Test criteria before using them. Before a round opens, run a small number of test applications through the criteria and scoring process with staff acting as assessors. Does the scoring process produce results that match staff judgement? If not, the criteria need revision.
Align the application form with the criteria. Applicants should be able to see clearly how the information they are providing maps to the criteria on which they will be assessed. A form that asks questions not connected to any criterion wastes applicant time. A criterion that cannot be assessed from the application form is unusable.
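One way to check that alignment is to map each form question to the criterion it feeds, then flag questions tied to no criterion and criteria with no supporting question. The sketch below is purely illustrative: the question names, criteria, and `FORM_TO_CRITERIA` mapping are invented for the example and are not Tahua configuration.

```python
# Hypothetical question-to-criterion mapping used to audit form/criteria alignment.
FORM_TO_CRITERIA = {
    "q_participation_numbers": "community_reach",
    "q_delivery_timeline": "delivery_capability",
    "q_budget_breakdown": "budget_efficiency",
    "q_organisation_history": None,  # feeds no criterion: candidate for removal
}

CRITERIA = {"community_reach", "innovation", "delivery_capability", "budget_efficiency"}

# Questions that waste applicant time, and criteria that cannot be assessed
# from the form as it stands.
unused_questions = [q for q, c in FORM_TO_CRITERIA.items() if c is None]
unassessable = CRITERIA - {c for c in FORM_TO_CRITERIA.values() if c}

print("Questions tied to no criterion:", unused_questions)
print("Criteria with no supporting question:", sorted(unassessable))
```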
The relationship between assessment criteria and your grants management system is closer than it might appear. The system should:
Capture criteria scores against specific applications. Each assessor's score for each criterion should be recorded as a discrete data point — not a total score, but the individual criterion scores. This allows analysis of where assessors' scores diverged on an application, and produces a complete decision record.
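As a rough sketch of what "discrete data point" means in practice, the Python below models one assessor's score against one criterion on one application as its own record, and shows how that shape makes divergence analysis straightforward. The field names and the `CriterionScore` and `divergence` helpers are illustrative assumptions, not Tahua's data model.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CriterionScore:
    """One assessor's score for one criterion on one application."""
    application_id: str
    criterion_id: str
    assessor_id: str
    score: int                # raw score on the criterion's scale, e.g. 0-5
    comment: str              # assessor's reasoning against this criterion
    submitted_at: datetime    # when the score was locked in

def divergence(scores: list[CriterionScore]) -> dict[tuple[str, str], int]:
    """Spread (max - min) of scores per (application, criterion) pair."""
    by_key: dict[tuple[str, str], list[int]] = {}
    for s in scores:
        by_key.setdefault((s.application_id, s.criterion_id), []).append(s.score)
    return {key: max(vals) - min(vals) for key, vals in by_key.items()}
```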
Present criteria consistently to assessors. Assessors should see the same criteria in the same format, with the same guidance text, regardless of where or when they are completing their assessment. A system where criteria are communicated via PDF or email, and scores are collected via spreadsheet, creates inconsistency.
Support multi-assessor scoring without cross-contamination. Assessors should not be able to see each other's scores before they have submitted their own. A system that reveals scores as they are submitted creates social influence effects that undermine the independence of assessment.
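A minimal sketch of that blinding rule, assuming a hypothetical `visible_scores` helper: other assessors' scores are returned only once the requesting assessor has submitted their own.

```python
def visible_scores(requesting_assessor: str,
                   submitted_scores: dict[str, int]) -> dict[str, int]:
    """Return other assessors' scores for an application only after the
    requesting assessor has submitted their own; otherwise return nothing.

    submitted_scores maps assessor_id -> that assessor's score for the
    application. Names and shapes are illustrative only.
    """
    if requesting_assessor not in submitted_scores:
        # No peeking: before submitting, an assessor sees no one else's scores.
        return {}
    return {a: s for a, s in submitted_scores.items() if a != requesting_assessor}

scores = {"assessor_a": 4, "assessor_b": 3}
print(visible_scores("assessor_c", scores))  # {} - has not yet submitted
print(visible_scores("assessor_a", scores))  # {'assessor_b': 3}
```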
Record assessor commentary. Numerical scores alone are insufficient for a defensible decision record. Assessors should be able to record notes against each criterion. This commentary is part of the assessment record and should be stored in the system.
Support panel review. After individual scoring, a panel review process allows assessors to compare scores, discuss borderline cases, and arrive at a final recommendation. The panel's discussion and recommendation should be documented in the system.
Produce weighted totals automatically. If criteria are weighted, the system should calculate weighted scores automatically — not via a spreadsheet formula that someone might miscalculate or misconfigure.
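As a concrete illustration of the calculation involved, the sketch below computes a weighted total from per-criterion scores. The weights, criterion names, and 0-5 scale are invented for the example, not recommended values.

```python
# Illustrative criterion weights (must sum to 1.0); not a recommended scheme.
WEIGHTS = {
    "community_reach": 0.4,
    "innovation": 0.3,
    "delivery_capability": 0.2,
    "budget_efficiency": 0.1,
}

def weighted_total(raw_scores: dict[str, float],
                   weights: dict[str, float] = WEIGHTS) -> float:
    """Weighted total for one assessor's scores, each on a common 0-5 scale."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("criterion weights must sum to 1.0")
    missing = set(weights) - set(raw_scores)
    if missing:
        raise ValueError(f"missing scores for: {sorted(missing)}")
    return sum(raw_scores[c] * w for c, w in weights.items())

# Example: strong community reach outweighs a middling budget score.
print(weighted_total({"community_reach": 5, "innovation": 4,
                      "delivery_capability": 3, "budget_efficiency": 2}))  # 4.0
```

Keeping this calculation inside the system, rather than in a spreadsheet, means the same weights are applied to every application and the arithmetic is part of the auditable record.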
For government funders, Crown entities, and others with significant accountability requirements, the assessment record is the basis for accountability. An OIA request may ask for the criteria used, the scores for each application, the assessors' reasoning, and the basis for the final decision.
A grants management system that produces this record automatically — because assessors complete their scoring inside the system — is qualitatively different from one where the record has to be reconstructed from spreadsheets and email threads after the fact.
The assessment criteria and the record of how they were applied are inseparable. Criteria that were well-designed but poorly documented are almost as problematic as criteria that were poorly designed.
For funders reviewing their assessment process, Tahua's assessment features are designed around the documentation requirements of compliance-heavy programmes, including how assessment criteria are configured and scored.