How a funder assesses and selects grants is one of the most consequential aspects of grantmaking. Poor assessment — inconsistent criteria, biased panels, opaque decisions — undermines trust, produces worse funding outcomes, and creates legal and reputational risk. Well-designed selection criteria create consistency, defensibility, and fairness. This guide covers how funders design and implement effective grant selection frameworks.
Why selection criteria matter
Grant selection criteria serve multiple purposes:
- Consistency: Ensure all applications are assessed against the same standards
- Transparency: Tell applicants what will be assessed, so they can respond appropriately
- Defensibility: Create an evidence trail for funding decisions that can be reviewed and explained
- Alignment: Ensure funded grants match the programme's strategic priorities
- Equity: Apply the same standards regardless of the assessor's relationship with the applicant
Criteria without these properties are just window dressing — they don't actually shape decisions.
Eligibility criteria (pass/fail)
Criteria that must be met for an application to proceed to assessment:
- Organisation type (registered charity, incorporated society)
- Geographic focus (grants in a defined region)
- Activity type (eligible project types)
- Grant size (minimum and maximum)
Eligibility screening should happen before substantive assessment to avoid wasting assessor time on ineligible applications.
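Because eligibility is pass/fail, it's easy to express as a simple screen. Here is a minimal sketch in Python; the field names, eligible values, and size limits are illustrative assumptions, not a prescribed schema:

```python
# Minimal eligibility screen: every check must pass before an
# application proceeds to scored assessment. Eligible values and
# field names are illustrative assumptions, not a prescribed schema.
ELIGIBLE_ORG_TYPES = {"registered charity", "incorporated society"}
ELIGIBLE_REGIONS = {"Waikato", "Bay of Plenty"}       # example region list
ELIGIBLE_PROJECT_TYPES = {"community services", "environment"}
MIN_GRANT, MAX_GRANT = 5_000, 50_000                  # example size limits

def screen_eligibility(app: dict) -> list[str]:
    """Return a list of failed checks; an empty list means eligible."""
    failures = []
    if app["org_type"] not in ELIGIBLE_ORG_TYPES:
        failures.append("organisation type")
    if app["region"] not in ELIGIBLE_REGIONS:
        failures.append("geographic focus")
    if app["project_type"] not in ELIGIBLE_PROJECT_TYPES:
        failures.append("activity type")
    if not MIN_GRANT <= app["amount_requested"] <= MAX_GRANT:
        failures.append("grant size")
    return failures
```

Returning the list of failed checks, rather than a bare yes/no, also gives you the content for a decline notice to the applicant.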
Assessment criteria (scored)
Criteria that determine how strong an application is relative to others. Common categories:
- Need and rationale: evidence that the project addresses a genuine need
- Outcomes and impact: the change the project will deliver and how it will be measured
- Organisational capacity: the organisation's ability to deliver the project and manage the grant
- Budget: whether costs are realistic and represent good value
- Sustainability: whether benefits will continue beyond the grant period
Threshold criteria
Some criteria carry a minimum score that must be met before an application can be funded, even if its overall score is high:
- A minimum governance score
- A minimum organisational capacity score
This prevents funding projects at organisations that are fundamentally unable to manage a grant.
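In code, a threshold check runs after scoring and before ranking. A minimal sketch, assuming a 1-5 rubric scale; the minimum levels here are assumptions to set per programme:

```python
# Threshold check: a high total score cannot rescue an application
# that falls below the minimum on any threshold criterion.
# The minimum levels are assumptions, not recommendations.
THRESHOLDS = {"governance": 3, "organisational capacity": 3}

def passes_thresholds(scores: dict[str, int]) -> bool:
    """scores maps criterion name -> rubric score (1-5)."""
    return all(scores.get(criterion, 0) >= minimum
               for criterion, minimum in THRESHOLDS.items())
```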
Scoring rubrics
A rubric describes what different score levels look like for each criterion. Without a rubric, scores mean different things to different assessors.
Example rubric for "Outcomes and Impact":
| Score | Description |
|---|---|
| 5 | Compelling, evidence-based theory of change; specific, measurable outcomes; realistic attribution; clear evidence of need |
| 4 | Sound theory of change; measurable outcomes; some evidence base; need is clear |
| 3 | Theory of change present but some gaps; outcomes partially measurable; limited evidence |
| 2 | Weak theory of change; vague outcomes; limited evidence of need |
| 1 | No theory of change; outputs confused with outcomes; no evidence of need |
A rubric lets assessors calibrate their scoring and makes inter-rater comparison meaningful.
Weighting criteria
Not all criteria should carry equal weight. A funder whose primary priority is organisational capacity might weight capacity higher than impact. Weighting should reflect the programme's actual priorities.
Example weighting:
- Need and rationale: 15%
- Outcomes and impact: 30%
- Organisational capacity: 25%
- Budget: 15%
- Sustainability: 15%
Publish the weighting to applicants so they know where to focus.
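Combining rubric scores with weights is simple arithmetic. A minimal sketch, assuming the 1-5 rubric scale and the example weighting above:

```python
# Weighted total on a 1-5 rubric scale, using the example weights
# above. Weights are expressed as fractions that sum to 1.0 (100%).
WEIGHTS = {
    "need and rationale": 0.15,
    "outcomes and impact": 0.30,
    "organisational capacity": 0.25,
    "budget": 0.15,
    "sustainability": 0.15,
}

def weighted_score(scores: dict[str, int]) -> float:
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Example: strong impact and capacity, weaker sustainability.
print(weighted_score({
    "need and rationale": 4,
    "outcomes and impact": 5,
    "organisational capacity": 4,
    "budget": 3,
    "sustainability": 2,
}))  # -> 3.85 out of 5
```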
Conflicts of interest
Assessment panels will encounter conflicts of interest. A conflict exists when an assessor has a personal, financial, or professional relationship with an applicant.
Best practice:
- Require assessors to declare all potential conflicts before assessment begins
- Have a clear policy on what constitutes a conflict (employment, board membership, close friendship, financial relationship)
- Require recusal from assessing conflicted applications
- Document all declarations and recusals
- Have a mechanism for assessing applications where most panel members have conflicts
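The recusal rules above reduce to a simple filter: remove conflicted assessors from each application's pool and escalate when too few remain. A sketch with illustrative names and an assumed quorum of three assessors per application:

```python
# Conflict handling: given declared conflicts, compute which panel
# members may score each application, and flag applications where
# too few unconflicted assessors remain. Names are illustrative.
MIN_ASSESSORS = 3  # assumed quorum per application

def assessor_pool(panel: set[str], conflicts: dict[str, set[str]],
                  application_id: str) -> set[str]:
    """Panel members not conflicted on this application."""
    conflicted = {a for a, apps in conflicts.items() if application_id in apps}
    return panel - conflicted

panel = {"Aroha", "Ben", "Chen", "Dana"}
conflicts = {"Ben": {"APP-014"}, "Chen": {"APP-014", "APP-021"}}
pool = assessor_pool(panel, conflicts, "APP-014")
if len(pool) < MIN_ASSESSORS:
    print("Escalate APP-014: insufficient unconflicted assessors")
```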
Panel composition and calibration
Diverse panels produce better decisions — include a mix of sector expertise, community knowledge, lived experience, and professional skill.
Calibration — aligning panel members' understanding of criteria before they score independently — improves inter-rater reliability. Practice with sample applications before the real assessment.
Independent vs. consensus scoring:
- Independent scoring (each assessor scores without discussion) reduces conformity bias
- Consensus scoring (the panel discusses and agrees a single score) works well for smaller panels, though it carries some risk of anchoring on the most dominant voices
Both approaches need discussion of significant discrepancies before finalising scores.
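Discrepancy checks can be automated. This sketch flags any criterion where independent scores sit more than one rubric point apart; the tolerance is an assumption to tune per programme:

```python
# Discrepancy check for independent scoring: flag criteria where the
# spread between assessors exceeds a tolerance, so the panel discusses
# them before scores are finalised. Assumes all assessors scored the
# same criteria; the tolerance is an assumption.
MAX_SPREAD = 1  # more than 1 rubric point apart triggers discussion

def flag_discrepancies(panel_scores: dict[str, dict[str, int]]) -> list[str]:
    """panel_scores maps assessor name -> {criterion: score}."""
    criteria = next(iter(panel_scores.values())).keys()
    flagged = []
    for criterion in criteria:
        values = [scores[criterion] for scores in panel_scores.values()]
        if max(values) - min(values) > MAX_SPREAD:
            flagged.append(criterion)
    return flagged

print(flag_discrepancies({
    "Aroha": {"outcomes and impact": 5, "budget": 3},
    "Ben":   {"outcomes and impact": 2, "budget": 3},
}))  # -> ['outcomes and impact']
```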
Documentation and record-keeping
Document the assessment process:
- Record individual assessor scores (even in consensus processes, retain individual assessments)
- Document panel discussion notes
- Record the rationale for borderline decisions
- Retain all assessment materials as part of the grant record
This documentation serves three purposes:
1. Enables quality review of decisions
2. Creates the evidence base for feedback to unsuccessful applicants
3. Provides protection if decisions are challenged
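One way to keep these elements together is a single structured record per application. The fields below are illustrative, not a prescribed schema:

```python
# A retained assessment record that keeps individual scores, panel
# notes, and decision rationale together. Field names are
# illustrative assumptions, not Tahua's schema.
from dataclasses import dataclass, field

@dataclass
class AssessmentRecord:
    application_id: str
    individual_scores: dict  # assessor -> {criterion: score}
    panel_notes: str
    decision: str            # e.g. "fund", "decline"
    decision_rationale: str  # especially important for borderline decisions
    declarations: list = field(default_factory=list)  # conflicts and recusals
```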
Feedback to unsuccessful applicants
Providing feedback to unsuccessful applicants is one of the most significant improvements funders can make. It:
- Helps applicants improve future applications
- Demonstrates respect for the effort applicants invested
- Contributes to a more capable sector
- Reduces repeat applications from organisations that fundamentally don't meet criteria
Feedback doesn't need to be extensive: two or three key points on why the application wasn't funded are sufficient for most programmes.
Portfolio balance
Pure merit assessment — funding the highest-scoring applications regardless of portfolio considerations — can produce imbalanced portfolios. Funders may deliberately balance the portfolio by:
- Geography (not all grants to major cities)
- Organisation size (some investment in smaller organisations)
- Sector focus (not all grants to the same issue area)
Portfolio balancing should be explicit and disclosed rather than hidden. Applicants should know if portfolio considerations will apply alongside merit assessment.
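Disclosed balancing can be expressed as merit-order selection with explicit caps. This sketch applies a single geographic cap, skipping applications that would bust the total budget or push one region over its share; the cap level and field names are assumptions:

```python
# Merit-first selection with an explicit, disclosed portfolio cap:
# fund in descending score order, skipping applications that would
# exceed the budget or a region's maximum share of it.
REGION_CAP = 0.5  # assumed: no region takes more than half the budget

def select(ranked_apps: list[dict], budget: float) -> list[dict]:
    """ranked_apps must be sorted by descending merit score."""
    funded, spent, by_region = [], 0.0, {}
    for app in ranked_apps:
        cost = app["amount_requested"]
        region_total = by_region.get(app["region"], 0.0) + cost
        if spent + cost <= budget and region_total <= budget * REGION_CAP:
            funded.append(app)
            spent += cost
            by_region[app["region"]] = region_total
    return funded
```

Making the cap an explicit constant like this is one way to honour the disclosure principle: the rule applied to applicants is the rule published to them.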
Tahua's grants management platform provides configurable assessment workflows — scoring rubrics, panel management, conflict of interest declaration, and decision documentation — that help funders run consistent and defensible selection processes.