Blind peer review has a well-established evidence base in academic publishing, and that evidence has led many research funders to adopt it for grant assessment. The logic is straightforward: if assessors do not know who submitted the application, they cannot favour or disfavour the applicant on the basis of name, institutional affiliation, or professional reputation.
That logic is sound as far as it goes. The problem is that it does not go as far as most funders assume. Blind review is a targeted intervention for a specific category of bias. Understanding exactly which biases it addresses — and which ones it does not — is more useful than treating it as a general quality signal that improves assessment outcomes across the board.
Blind review in grants assessment removes applicant-identifying information — typically the lead investigator's name, affiliated institution, and any biographical details — from the version of the application that assessors see. The programme administrator retains full access to all application details. The grantee is fully identified in the contract and payment processes. Blind review applies only to the scoring phase.
The category of bias it addresses is identity-based bias during assessment: the tendency for assessors to score more favourably when they recognise a well-regarded researcher or institution, or less favourably when the applicant is from a less prestigious institution or is personally unknown to the panel. Research on this effect in peer review is consistent: name and institutional affiliation influence scores in ways that are not justified by the quality of the work.
What blind review does not address:
Funding history bias. An assessor who has reviewed grants in a particular sub-field for several years may recognise a body of work from its methodology or writing style even when the name is removed. If the content of the application references the applicant's prior projects — which is often legitimate and expected — the applicant's identity may be inferred.
Panel composition bias. The demographic and professional profile of your panel shapes the range of perspectives applied to applications. Blind review does not change who is on your panel.
Methodological bias. Assessors trained in particular research traditions may systematically rate applications using different methodological approaches less favourably. Blind review does not affect this.
Criteria design bias. If your scoring rubric weights criteria that correlate with institutional resources (scale of preliminary data, breadth of team expertise), better-resourced institutions will score higher on those criteria regardless of whether names are removed.
None of this is an argument against blind review. It is an argument for clarity about what you are actually fixing when you implement it.
Blind review is most appropriate when:

- The applicant pool is large enough that assessors are unlikely to recognise individual applicants from the content of the application alone
- The application form does not require a detailed account of the applicant's prior work
- Identifying information is confined to fields the system can anonymise systematically

Blind review is less appropriate or harder to implement when:

- The eligible pool is small and specialist, so most assessors will know most applicants regardless of what is removed
- Describing prior work is a required part of the application, as in research continuity grants
- Attachments and supporting documents carry identifying content that cannot be reliably stripped
For research funders like the Neurological Foundation and Cure Kids, which operate in defined fields with specialist assessors, the appropriateness of blind review needs to be evaluated on a round-by-round basis rather than adopted as a standing policy.
When configuring blind review, the first step is identifying every field in the application that carries identifying information. This is more extensive than most programme managers initially assume.
Obvious identifying fields:
- Lead investigator name and biography
- Affiliated institution
- Contact details
- CV or supporting documentation with names
Less obvious identifying fields:
- Prior grant history (if the funder's own programmes are referenced)
- Acknowledgement sections referencing collaborators
- Project titles that include the researcher's name or lab name
- Institutional ethics approval references that include the institution name
- Grant reference numbers from prior funding rounds
What you cannot fully anonymise:
- Writing style and voice — experienced assessors in a field may recognise a specific researcher's work
- Methodological signatures — particularly distinctive methodological approaches
- Geographic references where the location is integral to the project
- Organisational capacity sections that describe institutional infrastructure
The practical approach is to anonymise the fields you can control systematically (name, institution, CV) and to instruct applicants, in the application guidance, not to reference their own identity in the body of the application. That guidance needs to be clear, specific, and enforced at the intake stage: a submitted application that contains substantial identifying information in the body text needs to be flagged before it enters the assessment queue, not discovered after assessment has begun.
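The field-stripping half of that approach can be sketched in a few lines. This is a minimal illustration, not Tahua's actual data model; the field names and the dict-shaped application record are assumptions:

```python
REDACTED = "[REDACTED]"

# Fields the programme administrator designates as identifying for this round
# (illustrative names only).
IDENTIFYING_FIELDS = {
    "lead_investigator", "biography", "institution",
    "contact_email", "cv_attachment",
}

def anonymise(application: dict, fields: set = IDENTIFYING_FIELDS) -> dict:
    """Return the assessor-facing copy; the full record is never mutated."""
    return {
        key: (REDACTED if key in fields else value)
        for key, value in application.items()
    }

full = {
    "id": "APP-014",
    "lead_investigator": "Dr A. Example",
    "institution": "Example University",
    "project_summary": "A study of ...",
}
assessor_view = anonymise(full)
print(assessor_view["lead_investigator"])  # [REDACTED]
print(assessor_view["project_summary"])    # A study of ...
```

The key property is that the function produces a separate assessor-facing copy while the administrator's full record stays intact, which matches the two-version approach described below.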
In a purpose-built grants management system, blind review is a round-level configuration setting. When it is enabled, the system generates two versions of each application: the full version (visible to programme administrators) and the anonymised version (visible to assessors). The anonymisation is applied to the designated fields; all other application content is unchanged.
In Tahua, the administrator's view always shows complete application details regardless of whether blind review is configured. The assessor's view shows the anonymised version. The programme manager can see which fields have been anonymised and can review the anonymised version to confirm it does not retain identifying information before the assessment phase opens.
This architecture matters for practical management reasons. During the assessment phase, the programme manager may need to cross-reference an assessor's query against the full application record. If blind review means the administrator also loses access to identifying information, that creates operational problems. The administrator always needs full visibility; the restriction applies only to assessors.
The configuration process should also specify: which panel roles are subject to blind review (typically assessors, not the convenor); whether blind review applies to all applications in the round or only to a subset; and how attachments are handled (supporting documents that contain names in their content or metadata need separate management).
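Those configuration decisions can be captured in a small structure. This is a hypothetical sketch; the role names, the `BlindReviewConfig` fields, and the `view_for` helper are all assumptions for illustration, not an actual Tahua API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlindReviewConfig:
    enabled: bool = True
    # Roles that see the anonymised version; convenor and administrator
    # retain full visibility by default.
    blinded_roles: frozenset = frozenset({"assessor"})
    # Attachments need their own handling (names in content or metadata).
    blind_attachments: bool = True

def view_for(role: str, full_record: dict, anonymised_record: dict,
             config: BlindReviewConfig) -> dict:
    """Return the version of the application this panel role should see."""
    if config.enabled and role in config.blinded_roles:
        return anonymised_record
    return full_record

full = {"lead_investigator": "Dr A. Example", "summary": "..."}
anon = {"lead_investigator": "[REDACTED]", "summary": "..."}
config = BlindReviewConfig()
print(view_for("assessor", full, anon, config)["lead_investigator"])  # [REDACTED]
print(view_for("convenor", full, anon, config)["lead_investigator"])  # Dr A. Example
```

Making the blinded roles an explicit part of the round configuration, rather than an informal understanding, is what lets the programme manager confirm before assessment opens that the convenor and administrator keep full visibility while assessors do not.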
The panel convenor occupies a middle position in blind review configuration that is worth defining explicitly. The convenor is responsible for managing the panel process — assigning applications to assessors, managing conflict of interest declarations, handling queries, and overseeing the scoring workflow. To do that job, the convenor typically needs access to full application details.
In practice, this means the convenor is usually configured as an administrator rather than an assessor for blind review purposes. They can see applicant identity; assessors cannot. That distinction needs to be explicit in the round configuration and understood by all panel members: if assessors believe identifying information flows freely from the convenor to the panel, they have a reasonable basis for doubting that the blind review is genuinely blind.
The protocol should be explicit: the convenor does not share identifying information with assessors during the assessment phase. Any queries from assessors about application content should be answered without introducing identifying information. If an assessor's query can only be answered by disclosing applicant identity, the decision about whether to answer it (and thereby effectively end the blind review for that application) should be made by the programme manager, not the convenor alone.
Conflict of interest declarations in a blind review context add a layer of complexity. An assessor who cannot see an applicant's name cannot declare a conflict against them by name. The standard approach is to collect COI declarations before the blind review phase begins, when assessors have access to the full applicant list, or to collect them against anonymised application identifiers with a process for the convenor to cross-reference against the full list.
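The second approach (declarations against anonymised identifiers) can be implemented with a convenor-only cross-reference. A minimal sketch; the identifiers, record shapes, and `resolve_conflicts` helper are assumptions for illustration:

```python
# Convenor-only mapping from anonymised application ID to applicant identity.
# Assessors never see this table.
id_to_applicant = {
    "APP-001": "Dr A. Example",
    "APP-002": "Dr B. Other",
}

# An assessor declares a conflict against the anonymised identifier only.
declarations = [
    {"assessor": "Reviewer 3", "application_id": "APP-002"},
]

def resolve_conflicts(declarations: list, id_to_applicant: dict) -> list:
    """Convenor view: attach the real applicant behind each declaration."""
    return [
        {**d, "applicant": id_to_applicant[d["application_id"]]}
        for d in declarations
    ]

resolved = resolve_conflicts(declarations, id_to_applicant)
print(resolved[0]["applicant"])  # Dr B. Other
```

The design point is the separation: assessors interact only with anonymised identifiers, while the cross-reference lives with the convenor, who already has full visibility.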
Blind review breaks down most commonly in these scenarios:
Small applicant pools. A funding programme in a specialist area may receive 20 applications from a pool of 40 eligible researchers. Most assessors will know most applicants. Removing names does not remove identifiability; it just makes the assessment process more awkward without reducing bias.
Self-referential applications. Applicants who describe their own prior work in detail will often make identification straightforward for any assessor familiar with the field. This is particularly common in research continuity grants where describing prior work is a required part of the application.
Inconsistent anonymisation. If the applicant's name is removed from the form fields but their institutional letterhead appears on attached documents, or their email address appears in a supporting letter, the anonymisation is incomplete. The process for handling attachments and supporting documents needs to be as rigorous as the process for handling form fields.
Late-stage panel discussion. If assessors have scored applications blind but then discuss rankings in a panel session where identifying information is shared, the blind review has covered the scoring phase but not the deliberation phase. For most funders this is acceptable (the scores are the primary record), but the protocol should be explicit about what is and is not blind.
Understanding these edge cases is important for setting realistic expectations. Blind review that is implemented as a policy commitment but is not consistently operational in practice creates a compliance risk: if a challenged decision requires the funder to demonstrate that blind review was properly applied, partial or inconsistent anonymisation is worse than no blind review at all.
Blind review and weighted scoring rubrics address different problems and are more effective in combination than either is alone. Blind review addresses who the applicant is; rubrics address how the application is scored. Removing identity bias from a poorly specified scoring instrument still produces inconsistent and potentially unfair results.
A weighted rubric — one that assigns specific percentage weights to criteria like Innovation (40%), Methodology (30%), and Team (30%) — forces assessors to evaluate each criterion separately before arriving at an overall score. That structure makes it harder for a general impression (positive or negative) about an application to inflate or deflate scores across all criteria, because each criterion has to be scored independently.
The combination of blind review and a well-specified weighted rubric addresses the two most common sources of individual assessor bias: who the applicant is, and the halo/horn effect that causes assessors to rate all criteria highly or poorly based on an overall impression. Neither tool is sufficient on its own. Together, they materially improve the reliability and defensibility of assessment outcomes.
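As a worked example, the weights quoted above (Innovation 40%, Methodology 30%, Team 30%) combine independently scored criteria like this. The dict shape and the 1–10 scale are assumptions for illustration:

```python
WEIGHTS = {"innovation": 0.40, "methodology": 0.30, "team": 0.30}

def weighted_score(criterion_scores: dict) -> float:
    """Combine independently scored criteria into one overall score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(w * criterion_scores[c] for c, w in WEIGHTS.items())

# Each criterion is scored on its own before any overall figure exists,
# which is what blunts the halo/horn effect described above.
scores = {"innovation": 8, "methodology": 6, "team": 7}
print(round(weighted_score(scores), 2))  # 7.1
```

The overall score is derived from the criterion scores, never entered directly, so a general impression of the application has no field of its own to flow into.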
For research funders designing a scoring framework, the rubric design question is at least as important as the blind review question. The criteria need to be specific enough to produce different scores for different applications. Criteria that are too broad — "scientific quality" without further definition — function as an invitation for assessors to score based on their general impression of the research, which is precisely what blind review is trying to prevent.
For detailed guidance on designing scoring rubrics that produce comparable, defensible results, see our guide to weighted scoring rubrics for research grants.
To see how blind review and weighted scoring rubrics are configured in Tahua, book a demo.