A scoring rubric is the backbone of a fair grant assessment process. Without one, assessors default to gut feel — and gut feel is inconsistent, hard to explain, and impossible to defend when an unsuccessful applicant challenges your decision.
With a well-designed rubric, assessors score against the same criteria, using the same scale, with the same understanding of what each score means. Decisions become more consistent, rationale becomes documentable, and the programme becomes more defensible.
Here's how to build one.
The biggest mistake in rubric design is starting from a generic template and retrofitting it to your programme. Rubrics need to reflect your specific grant objectives: what you're trying to achieve, who you're trying to fund, and what success looks like.
Before you write a single criterion, answer these questions:
- What is the primary purpose of this grant? (Capacity building? Innovation? Service delivery? Capital projects?)
- What does a strong application look like in your context?
- What would automatically disqualify an application?
- Are there population or geographic priorities that should influence scoring?
Your criteria should flow directly from these answers. A rubric for a community arts fund will look completely different from one for an infrastructure grant or a research programme.
Too few criteria and your rubric isn't capturing enough nuance. Too many and assessors lose focus and scores become noisy.
Most grant programmes work well with four to seven criteria. A typical structure might include:

- Need: the problem is clearly evidenced and matters to the community the programme serves
- Strategic fit: the project aligns with the programme's objectives and priorities
- Organisational capability: the applicant can realistically deliver the work
- Outcomes: the proposed activities plausibly lead to the intended impact
- Budget: costs are reasonable, justified, and proportionate to the expected outcomes
You may add or remove criteria based on your programme. A small community grants round might have three criteria. A major multi-year funding programme might have seven.
A scale of 1–5 is meaningless unless assessors know what 3 means. Without score definitions, two assessors can give the same application a 3 and mean completely different things.
For each criterion, write brief descriptors for each score level. These don't need to be long — two or three sentences each is enough. The goal is to anchor assessors to shared expectations.
An example for "Organisational Capability" on a 1–5 scale:
| Score | Descriptor |
|---|---|
| 5 | Strong track record delivering similar work; relevant staff and governance clearly demonstrated |
| 4 | Good evidence of capability with minor gaps; manageable risk |
| 3 | Some evidence of capability but meaningful gaps or uncertainty |
| 2 | Limited evidence; significant concerns about delivery capacity |
| 1 | Little or no evidence of capability; high delivery risk |
Write descriptors for every criterion before your assessment panel reviews a single application. This is the calibration work that makes scoring consistent.
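If your tooling allows it, the descriptors can also live as data so they can be displayed next to the scoring form. Here's a minimal sketch in Python, reusing the example table above; the structure is an assumption, not a prescribed format:

```python
# A minimal sketch of a rubric encoded as data so descriptors can be shown
# next to the scoring form. The criterion name and descriptor wording come
# from the example table above; the structure itself is just one option.
RUBRIC = {
    "Organisational capability": {
        5: "Strong track record delivering similar work; relevant staff and governance clearly demonstrated",
        4: "Good evidence of capability with minor gaps; manageable risk",
        3: "Some evidence of capability but meaningful gaps or uncertainty",
        2: "Limited evidence; significant concerns about delivery capacity",
        1: "Little or no evidence of capability; high delivery risk",
    },
    # ...remaining criteria follow the same shape
}

def descriptor(criterion: str, score: int) -> str:
    """Return the anchoring descriptor for a given criterion and score."""
    return RUBRIC[criterion][score]
```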
Not all criteria are equally important. A programme focused on transformative community impact might weight outcomes more heavily than budget efficiency. A capital grants programme might weight organisational capability most heavily.
Set your weights before assessment begins, and make them explicit to assessors. A common approach is to assign each criterion a percentage weight, summing to 100%, with the heaviest weight on the criterion that most directly reflects the programme's primary objective.
Document the weighting rationale. If you're challenged on a funding decision, "Criterion X was weighted at 25% because it reflects the programme's primary objective" is a defensible answer. "We thought it was more important" is not.
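To make the arithmetic concrete, here's a minimal sketch of how weighted totals are calculated. The criterion names and weights are purely illustrative; yours should come from your programme objectives.

```python
# Illustrative weights only; they must be set before assessment begins
# and should sum to 1.0 (100%).
WEIGHTS = {
    "Need": 0.25,
    "Outcomes": 0.25,
    "Organisational capability": 0.20,
    "Strategic fit": 0.15,
    "Budget": 0.15,
}

def weighted_total(scores: dict[str, int]) -> float:
    """Combine per-criterion scores (1-5) into a single weighted total."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())

# Example: weighted_total({"Need": 4, "Outcomes": 3, "Organisational capability": 5,
#                          "Strategic fit": 3, "Budget": 4}) -> 3.8 out of 5
```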
A threshold is a minimum score below which applications are not funded regardless of their total score.
Without a threshold, an application that scores very strongly on budget and strategic fit can end up funded despite a very low score on organisational capability — meaning you're funding a project the applicant is unlikely to be able to deliver.
A common approach is to set a minimum score on your most critical criteria (often need and capability) as well as a minimum total score. Applications that don't meet the threshold go to a decline pile without detailed deliberation — saving panel time for the competitive applications.
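As a sketch, a threshold rule can be expressed as per-criterion minimums plus a minimum total. The criterion names and cut-off values below are assumptions for illustration, not recommended settings.

```python
# Applications must clear minimum scores on the most critical criteria
# AND a minimum weighted total to stay in detailed deliberation.
# Cut-off values here are placeholders, not recommendations.
CRITERION_MINIMUMS = {"Need": 3, "Organisational capability": 3}
TOTAL_MINIMUM = 3.0

def passes_threshold(scores: dict[str, int], weighted_total: float) -> bool:
    """Return True if the application proceeds to detailed deliberation."""
    meets_minimums = all(
        scores[criterion] >= minimum
        for criterion, minimum in CRITERION_MINIMUMS.items()
    )
    return meets_minimums and weighted_total >= TOTAL_MINIMUM
```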
Before your assessment panel sees real applications, run a calibration exercise. Select two or three applications from a previous round (or write anonymised test cases) and have assessors score them independently using the rubric.
Then compare scores. Where are the gaps? If two assessors score the same application a 2 and a 5 on the same criterion, your descriptor for that criterion isn't clear enough. Work through the differences as a group to refine the definitions.
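One way to make that comparison quick is to look at the spread of scores each criterion received for the same test application. This is a rough sketch, and assumes each assessor's scores are captured as a criterion-to-score mapping.

```python
# For each criterion, measure the gap between the highest and lowest score
# given to the same test application. Large gaps point at descriptors that
# need tightening before the real round begins.
def calibration_gaps(panel_scores: list[dict[str, int]]) -> dict[str, int]:
    """panel_scores holds one criterion -> score dict per assessor."""
    criteria = panel_scores[0].keys()
    return {
        criterion: max(s[criterion] for s in panel_scores)
        - min(s[criterion] for s in panel_scores)
        for criterion in criteria
    }

# Example: calibration_gaps([{"Need": 2, "Budget": 4}, {"Need": 5, "Budget": 4}])
# returns {"Need": 3, "Budget": 0}, flagging the "Need" descriptor for review.
```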
One calibration session before each round prevents a significant amount of inconsistency and is worth the hour it takes.
Keeping the rubric visible during assessment sounds obvious but is frequently overlooked. Assessors need to be able to see the rubric while they're reading applications, ideally in the same interface.
If your assessment happens in a spreadsheet or email attachment, assessors are jumping between the rubric document, the application, and the scoring form. Errors and shortcuts follow.
Grants management software that presents the rubric alongside the application — or builds it directly into the scoring form — significantly reduces this problem.
A score without rationale isn't useful for appeals or future learning. Require assessors to write a brief note for each criterion — two or three sentences explaining why they gave the score they did.
This serves multiple functions: it forces assessors to engage more carefully with each criterion, it produces a written record of the decision rationale, and it gives unsuccessful applicants useful feedback when you communicate outcomes.
The rationale doesn't need to be long. "Strong need demonstrated with community data; applicant has identified the problem clearly and linked it to proposed activities" is sufficient.
The first version of your rubric won't be perfect. After each grant round, review scoring data to identify where the rubric is working and where it isn't.
Signs that a criterion needs revision: assessors consistently score it the same (everyone gives a 4), scores on it have no relationship to application quality, or assessors frequently ask clarifying questions about what it means.
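A quick way to spot the first of those signs is to check how much scores on each criterion actually vary across a round. A rough sketch, assuming scores are stored per application as a criterion-to-score mapping:

```python
from statistics import pstdev

# Flag criteria whose scores barely vary across applications in a round;
# a near-zero spread suggests the criterion isn't discriminating between
# stronger and weaker applications. The cutoff is an arbitrary placeholder.
def flat_criteria(round_scores: list[dict[str, int]], cutoff: float = 0.5) -> list[str]:
    """round_scores holds one criterion -> score dict per application."""
    criteria = round_scores[0].keys()
    return [
        criterion
        for criterion in criteria
        if pstdev(app[criterion] for app in round_scores) < cutoff
    ]
```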
Continuous improvement on your rubric is one of the highest-leverage investments a grants team can make.
This article is part of the complete guide: How to Evaluate 500 Grant Applications Without Burning Out Your Team.