Most grant programmes invest heavily in designing the application process and almost nothing in preparing the people who review the applications. The result is assessment panels where each member applies different standards, interprets criteria differently, and scores applications according to their own unstated assumptions.
This is how good applications get declined and weak ones get funded. It's also how grant programmes end up with complaints, legal exposure, and reputational damage.
Assessor training isn't optional. Here's how to do it properly.
Research on expert judgement consistently shows that even trained professionals given the same information will reach significantly different conclusions without shared standards and calibration. Grant assessment is no different.
Consider two assessors reviewing the same application for organisational capability. One has a background in community development and considers a five-person team with lived experience highly capable. The other comes from a corporate background and considers the same team underpowered. Neither is wrong in the abstract — but without a shared standard, their scores will diverge dramatically.
Multiply this across a panel of five or six assessors and several hundred applications, and you have a system that produces scores with substantial random variation built in. That variation isn't reflecting the quality of the applications — it's reflecting the unaddressed differences between assessors.
Send assessors programme documentation before any applications are shared. This includes:
- The programme brief: What is this grant trying to achieve? Who is eligible? What is the funding for? What does success look like?
- The scoring rubric: The criteria, scale, and descriptors. Give assessors time to read this and ask questions before the calibration session.
- The conflict of interest policy: What counts as a conflict and what the process is for declaring one. Declarations need to happen before assessors see applicant names.
- The assessment timeline: When applications will be shared, when scores are due, how the panel deliberation works, and when decisions will be communicated to applicants.
- The confidentiality requirements: What assessors can and cannot share outside the process.
Before assessors score real applications, run a calibration exercise. This is the single most effective intervention for improving assessor consistency.
How to run it:
Select two or three applications from a previous round, or write anonymised test cases that represent different quality levels (a strong application, a borderline application, and a weak one). Share these with assessors in advance.
In a one-hour session (in-person or video), have assessors share their scores for each test case and discuss the differences. The goal isn't consensus — assessors don't need to give the same score. The goal is shared understanding of what the criteria mean and what distinguishes a 4 from a 3.
Work through disagreements systematically. When two assessors score the same criterion a 2 and a 5, ask each to explain their reasoning. Usually this reveals either a misunderstanding of the criterion or a genuine difference in how the programme's priorities should be applied — which is useful information to surface before the real assessment begins.
Document any clarifications or shared understandings that emerge. Send a calibration summary to assessors after the session.
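If you collect scores for the test cases before the session, a short script can show where assessors diverge most, so the hour goes to the criteria that actually need discussion. Here's a minimal sketch, assuming scores are kept as a mapping from (test case, criterion) to a list of assessor scores on a 1-5 scale; the data and the "discuss first" threshold are illustrative, not taken from any real programme.

```python
# Spot the criteria where assessors diverge most on the calibration
# test cases, so the session can focus discussion there.
# Illustrative data: scores on a 1-5 scale, keyed by (test_case, criterion).

scores = {
    ("strong app",     "capability"): [4, 5, 4, 2],
    ("strong app",     "evidence"):   [5, 4, 5, 5],
    ("borderline app", "capability"): [3, 2, 4, 3],
    ("borderline app", "evidence"):   [2, 5, 3, 2],
}

# Rank (test case, criterion) pairs by score range: the widest ranges
# mark the disagreements worth working through first.
by_spread = sorted(scores.items(),
                   key=lambda kv: max(kv[1]) - min(kv[1]),
                   reverse=True)

for (case, criterion), vals in by_spread:
    spread = max(vals) - min(vals)
    flag = "  <- discuss first" if spread >= 2 else ""  # threshold is illustrative
    print(f"{case:15} {criterion:12} scores={vals} range={spread}{flag}")
```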
Assessors will have questions as they review applications. Provide a clear channel for these — a programme manager they can email or message — but be careful about the line between clarifying process questions and influencing scoring decisions.
It's appropriate to clarify: "Does organisational capability apply to the lead applicant only, or also to partners?"
It's not appropriate to hint: "That's an interesting question — the lead applicant does have limited experience, doesn't it?"
Programme managers who are tempted to steer assessors toward particular outcomes undermine the independence of the process. If you have strong views about specific applications, declare them and manage your own conflict of interest.
When scores are compiled, look for outliers — assessors whose scores are consistently much higher or lower than the panel average, or who score a specific application dramatically differently from other assessors.
Some variation is expected. Significant, consistent outliers are worth investigating.
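What "consistently much higher or lower" means can be made concrete: for each assessor, average how far their scores sit from the rest of the panel on the same applications. The sketch below assumes a simple data shape and an illustrative threshold; it's a starting point, not a prescribed method.

```python
from statistics import mean

# Illustrative compiled scores: application -> {assessor: total score}.
scores = {
    "app-01": {"A": 18, "B": 17, "C": 19, "D": 11},
    "app-02": {"A": 14, "B": 15, "C": 13, "D": 8},
    "app-03": {"A": 20, "B": 19, "C": 21, "D": 13},
}

# For each assessor, average how far their score sits from the mean of
# the *other* assessors on the same application. A large, consistent
# gap suggests a calibration problem rather than honest variation.
assessors = {a for panel in scores.values() for a in panel}
for assessor in sorted(assessors):
    gaps = []
    for panel in scores.values():
        others = [s for a, s in panel.items() if a != assessor]
        if assessor in panel and others:
            gaps.append(panel[assessor] - mean(others))
    bias = mean(gaps)
    flag = "  <- investigate" if abs(bias) > 3 else ""  # threshold is illustrative
    print(f"{assessor}: mean deviation {bias:+.1f}{flag}")
```

Comparing each assessor against the mean of the others, rather than the whole panel, stops their own score from diluting the signal.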
Common causes:
- The assessor misunderstood a criterion (fixable with a brief clarification)
- The assessor has an undisclosed relationship with an applicant (requires formal process)
- The assessor is applying a significantly different threshold for what counts as "capable" or "evidence-based" (requires discussion)
Don't just average out outliers. Understand why they're there. An outlier score is often carrying important information about the application or the process.
After individual scoring is complete, a deliberation session brings the panel together to discuss borderline applications and reach funding recommendations.
Structure this carefully. Unstructured deliberation tends to be dominated by the most confident voices, which introduces a different kind of bias.
A useful format:
1. Share compiled scores before the meeting so assessors can identify the applications they want to discuss
2. Agree at the start which applications are clearly fundable (high scores, no significant disagreements) and which are clearly not — these don't need discussion time; a simple triage sketch follows this list
3. Focus deliberation on the borderline applications
4. For each borderline application, ask each panel member to briefly state their view before open discussion begins
5. Document the reasoning behind recommendations, not just the decision
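As a sketch of the triage in step 2, assuming compiled scores are available as a list per application: partition by panel mean and by score spread, routing anything with significant disagreement to discussion regardless of its average. The thresholds here are placeholders to tune against your own scale and funding history.

```python
from statistics import mean

# Illustrative compiled scores: application -> list of panel scores (out of 25).
scores = {
    "app-01": [22, 21, 23, 22],
    "app-02": [20, 12, 21, 19],
    "app-03": [16, 17, 15, 16],
    "app-04": [9, 10, 8, 11],
}

# Thresholds are illustrative: tune them to your own scale and history.
FUND, DECLINE, MAX_SPREAD = 20, 12, 4

for app, vals in sorted(scores.items(), key=lambda kv: -mean(kv[1])):
    avg, spread = mean(vals), max(vals) - min(vals)
    if spread > MAX_SPREAD:
        bucket = "discuss (disagreement)"   # high spread forces discussion
    elif avg >= FUND:
        bucket = "clearly fundable"
    elif avg <= DECLINE:
        bucket = "clearly not fundable"
    else:
        bucket = "discuss (borderline)"
    print(f"{app}: mean={avg:.1f} spread={spread} -> {bucket}")
```

Note that a high-spread application is sent to discussion even if its average clears the funding threshold: averaging over a serious disagreement would hide exactly the information the panel needs to examine.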
After decisions are made, give assessors brief feedback on how the round went. This doesn't need to be extensive — a short summary of how the funded portfolio looks, any calibration issues that emerged, and what's changing for next round.
Assessors who feel their contribution is valued and who see how their work connects to programme outcomes are more likely to engage carefully in future rounds. Those who feel like cogs in a process will do the minimum.
If you're using the same assessors across multiple rounds, build in a brief retrospective at the end of each — what worked in the assessment process, what didn't, and what they'd change. This is some of the most useful feedback a programme manager can get.
This article is part of the complete guide: How to Evaluate 500 Grant Applications Without Burning Out Your Team.