Shortlisting is where most grants programmes carry the greatest process risk. It's the stage where large volumes of applications need to be reduced to a manageable competitive pool — quickly, consistently, and in a way that can be defended if challenged.
Done well, shortlisting is an efficient, auditable step that feeds a strong deliberation. Done poorly, it's where good applications get lost and where bias enters the process unchecked.
Here's a step-by-step shortlisting process that works for programmes at most scales.
Shortlisting should not include eligibility checking. Those are different steps, and mixing them creates confusion and inconsistency.
Before applications reach assessors, programme staff should screen every application against hard eligibility criteria: entity type, geographic scope, project dates, minimum and maximum funding amounts, required attachments. Applications that don't meet eligibility requirements are declined without entering the scoring process.
Document every eligibility decline with the specific criterion failed. This matters if the decision is challenged.
Only eligible applications proceed to shortlisting.
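A mechanical eligibility screen can be sketched in a few lines. The criteria, field names, and limits below (entity types, regions, funding range) are illustrative placeholders — substitute your programme's own published criteria:

```python
# Illustrative hard eligibility criteria -- replace with your programme's own.
ELIGIBLE_ENTITY_TYPES = {"charity", "community group"}
ELIGIBLE_REGIONS = {"north", "south"}
MIN_AMOUNT, MAX_AMOUNT = 5_000, 50_000

def eligibility_failures(application: dict) -> list[str]:
    """Return the specific criteria an application fails (empty = eligible).

    Recording the failed criteria, not just a pass/fail flag, gives you
    the documented grounds needed if a decline is challenged.
    """
    failures = []
    if application["entity_type"] not in ELIGIBLE_ENTITY_TYPES:
        failures.append("entity type")
    if application["region"] not in ELIGIBLE_REGIONS:
        failures.append("geographic scope")
    if not MIN_AMOUNT <= application["amount_requested"] <= MAX_AMOUNT:
        failures.append("funding amount outside range")
    if not application.get("attachments_complete", False):
        failures.append("required attachments missing")
    return failures
```

Returning the list of failed criteria (rather than a boolean) makes the audit trail a by-product of the screen itself.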
Before scoring begins, define how many applications you expect to shortlist for full assessment. This is typically two to three times the number of grants you expect to fund — enough to give assessors meaningful choice, not so many that deliberation becomes unmanageable.
For a programme expecting to fund 20 grants, a shortlist of 40–60 is reasonable. For a programme funding 5 large grants, a shortlist of 12–15 is appropriate.
Having a target doesn't mean cutting mechanically at a score threshold. It means you have a sense of what you're aiming for when you review scoring distributions.
There are two common models for the first scoring pass, and the right choice depends on your application volume.
Full panel scoring (all assessors review all applications): Works well for lower volumes (under 100 applications). Every application gets multiple scores from the start. More rigorous but more time-intensive.
Lead reviewer model (one assessor per application for initial screen): Works better for high volumes (100+ applications). Each application gets one reviewer for an initial score. Applications above a threshold go to full panel scoring; applications below are declined. Faster, but relies on individual judgement for the initial cut.
If you use a lead reviewer model, build in a spot-check: have a second reviewer independently score a random sample (10–15%) and compare. If there are significant discrepancies, investigate before proceeding.
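The spot-check can be done with a spreadsheet, but a short script makes the sampling and comparison reproducible. This is a minimal sketch; the 10-point discrepancy tolerance and the fixed random seed are assumptions to adjust for your scoring scale:

```python
import random

def draw_sample(app_ids: list, fraction: float = 0.15, seed: int = 1) -> list:
    """Randomly select a sample (default ~15%) for independent re-scoring.

    A fixed seed makes the sample reproducible for the audit record.
    """
    k = max(1, round(len(app_ids) * fraction))
    return random.Random(seed).sample(app_ids, k)

def spot_check(lead_scores: dict, second_scores: dict,
               tolerance: float = 10.0) -> list:
    """Flag applications where the two reviewers' scores diverge.

    Returns the ids whose scores differ by more than `tolerance` points;
    any flagged application should be investigated before proceeding.
    """
    return [
        app_id for app_id, second in second_scores.items()
        if abs(lead_scores[app_id] - second) > tolerance
    ]
```

If the flagged list is more than a handful of applications, treat it as a signal about the scoring rubric or reviewer calibration, not just those individual applications.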
Before ranking applications by total score, apply threshold filters. These are minimum scores on critical criteria below which applications are not funded regardless of their total.
Common thresholds:
- Minimum score on organisational capability (to filter out organisations unlikely to deliver)
- Minimum score on need or rationale (to filter out applications not addressing your programme objectives)
- Minimum total score (to create a floor for overall quality)
Remove applications below any threshold from the shortlist. Don't adjust total scores or apply discretion here — threshold rules should be mechanical and pre-defined.
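Because threshold rules are mechanical and pre-defined, they translate directly into code. The criterion names and minimums below are illustrative — yours should be fixed before scoring begins:

```python
# Pre-defined minimums; criterion names and values are illustrative.
THRESHOLDS = {
    "organisational_capability": 3,  # minimum on a 1-5 criterion scale
    "need": 3,
    "total": 60,                     # floor on the weighted total
}

def passes_thresholds(scores: dict) -> bool:
    """True only if every pre-defined minimum is met -- no discretion."""
    return all(scores[criterion] >= minimum
               for criterion, minimum in THRESHOLDS.items())

def apply_thresholds(applications: dict) -> tuple:
    """Split scored applications into those kept and those removed."""
    kept = {a: s for a, s in applications.items() if passes_thresholds(s)}
    removed = {a: s for a, s in applications.items() if not passes_thresholds(s)}
    return kept, removed
```

Keeping the removed set (rather than discarding it) lets you report how many applications failed each threshold in the shortlist documentation.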
Rank applications that passed threshold filters by their weighted total score. This gives you an ordered list from strongest to weakest.
Plot this on a simple distribution chart. Most scoring distributions show a cluster of strong applications, a competitive middle band, and a tail of weaker ones. The natural break points in the distribution are often clearer than a mechanical score cutoff.
Aim to cut at a natural break point rather than a round number. Cutting at 72 because that's where the cluster ends is more defensible than cutting at 70 because it's a round number.
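One simple heuristic for locating a natural break is to look for the largest gap between consecutive ranked scores near your target shortlist size. The sketch below assumes that approach; the ±25% search window is an assumption, and the result should still be checked against an actual distribution chart:

```python
def rank_and_find_break(totals: dict, target: int) -> tuple:
    """Rank applications by weighted total, then find the largest score
    gap near the target shortlist size -- a proxy for a natural break.

    Searches within roughly +/-25% of the target position (an
    illustrative window). Returns (ranked list, cut index), where the
    shortlist is ranked[:cut].
    """
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    lo = max(1, round(target * 0.75))
    hi = min(len(ranked) - 1, round(target * 1.25))
    # Index with the largest drop between consecutive ranked scores
    cut = max(range(lo, hi + 1),
              key=lambda i: ranked[i - 1][1] - ranked[i][1])
    return ranked, cut
```

Treat the computed cut as a starting point for panel review, not a final decision.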
Don't shortlist mechanically by score alone. Applications that fall in the competitive borderline band — typically within five to ten percentage points of your target cutoff — deserve panel review before a shortlist is finalised.
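Identifying the borderline band is itself mechanical. Given a ranked list of (application, score) pairs and a provisional cutoff score, a sketch might look like this — the 7.5-point default half-width sits in the five-to-ten-point band suggested above and is an assumption, not a rule:

```python
def borderline_band(ranked: list, cutoff_score: float,
                    width: float = 7.5) -> list:
    """Applications within `width` points of the cutoff, on either side.

    `ranked` is a list of (application_id, score) pairs. Applications
    just above the cutoff are included too: a panel can argue some out
    of the shortlist as well as in.
    """
    return [app for app, score in ranked
            if abs(score - cutoff_score) <= width]
```

Everything this function returns goes to the panel; everything outside the band stands or falls on its score.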
In a panel review of the borderline band:
- Share the distribution and identify the borderline applications
- Have each assessor flag applications they want to discuss
- Discuss flagged applications briefly — focus on whether scores reflect the application quality or whether there are extenuating factors (assessor unfamiliarity with the sector, missing context in the application)
- Make shortlist decisions collectively for borderline cases and document the reasoning
This step typically takes one to two hours. It catches errors the mechanical ranking would otherwise carry through and ensures the shortlist reflects genuine panel judgement.
Before finalising, review the shortlist against your programme's portfolio objectives: geographic spread, sector mix, the balance of organisation sizes, or whatever your programme has publicly committed to.
This is not the same as overriding scores based on demographics. It's checking whether your scoring process is producing a shortlist consistent with your programme's stated objectives. If the shortlist is heavily skewed in a direction inconsistent with your objectives, it's worth investigating why before proceeding — you may have a criterion that's unintentionally disadvantaging particular applicant types.
Communicate shortlist outcomes promptly and clearly. For declined applicants at this stage, a brief note with the reason for non-shortlisting is good practice — not a detailed score breakdown, but enough for the applicant to understand why they didn't progress.
For shortlisted applicants, communicate what comes next: full assessment timeline, whether additional information will be requested, and when they can expect a final decision.
Clear communication at the shortlist stage reduces follow-up queries and sets the tone for a professional process.
For audit and accountability purposes, document:
- The eligibility screening outcomes (how many screened, how many declined, on what grounds)
- The scoring approach and who scored what
- The threshold criteria applied and how many applications failed each threshold
- The distribution of scores and where the shortlist cutoff was applied
- Any borderline decisions made by the panel, with brief rationale
- The composition of the shortlist
This documentation doesn't need to be elaborate. A one-page shortlist report covering these points is sufficient for most programmes and provides a defensible record of the process.
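If the earlier steps were scripted, the one-page report can be assembled from their outputs. This is an illustrative layout only — the field names mirror the checklist above, not any required format:

```python
def shortlist_report(screened: int, ineligible: dict, thresholds_failed: dict,
                     cutoff: float, borderline_notes: list,
                     shortlist: list) -> str:
    """Assemble a plain-text record of the shortlisting process.

    `ineligible` and `thresholds_failed` map a criterion name to a count;
    `borderline_notes` holds one short rationale string per panel decision.
    """
    lines = [
        f"Applications screened: {screened}",
        f"Ineligible: {sum(ineligible.values())} "
        f"({', '.join(f'{k}: {v}' for k, v in ineligible.items())})",
        "Threshold failures: "
        + ", ".join(f"{k}: {v}" for k, v in thresholds_failed.items()),
        f"Shortlist cutoff applied at score: {cutoff}",
        "Borderline decisions:",
        *[f"  - {note}" for note in borderline_notes],
        f"Shortlisted: {len(shortlist)} applications",
    ]
    return "\n".join(lines)
```

A report generated this way stays consistent across funding rounds, which makes year-on-year comparison of the process straightforward.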
This article is part of the complete guide: How to Evaluate 500 Grant Applications Without Burning Out Your Team.