Running a Grant Assessment Panel: A Practical Guide for Convenors

The convenor's role in a grant assessment panel is often described as facilitation. That framing undersells the responsibility. A panel convenor is not there to chair a discussion — they are accountable for the integrity of a decision-making process that will determine how public or philanthropic funds are allocated. When that process is later questioned — through an OIA request, a Board review, or an audit — the convenor is the person who must demonstrate that it was run correctly.

This guide is for experienced grants managers who already understand assessment in principle. It focuses on the specifics: what preparation is required before a panel meets, how to run scoring and deliberation as distinct processes, what must be captured versus what is optional in the panel record, and what it means in practice for a panel process to be audit-ready.

What the convenor is actually responsible for

Before logistics: scope. The convenor is responsible for the integrity of the entire panel process, not just the meeting itself. That includes:

  • Confirming that assessors have appropriate expertise and independence before the panel is formed
  • Ensuring the COI declaration process is completed correctly, and that any declared conflicts are enforced — not just noted
  • Managing the pre-panel preparation so that assessors arrive informed and ready to score, rather than reading applications at the table
  • Separating the scoring phase from the deliberation phase, and maintaining that separation under time pressure
  • Producing a final recommendation record that is complete, defensible, and accurate

Convenors who treat the role as "keeping the meeting on track" tend to discover its full scope only when something goes wrong.

Panel composition: expertise, independence, and specialist networks

Panel composition is a design decision with probity implications. The most common tension is between expertise and independence: the assessors most qualified to review specialist applications — health researchers evaluating biomedical grants, Māori cultural practitioners assessing language revitalisation projects, engineers reviewing infrastructure proposals — often have professional relationships with applicants.

This does not mean specialist assessors cannot sit on panels. It means COI management for those panels needs to be more rigorous, not less. The assessment of a small sector — health research, performing arts, early-stage technology — typically involves a community of practitioners who all know each other. The convenor should assume that conflicts will be declared and plan the panel accordingly: enough assessors that recusals can be absorbed, and a COI process robust enough to handle partial recusal (an assessor scoring some applications but not others).

For panels drawing from academic networks, the additional complication is institutional affiliation: an assessor from a university that has submitted applications faces a category of potential conflicts distinct from personal relationships. Programme policy should define whether institutional affiliation alone requires declaration, or only where the assessor has a direct role in the application.

The panel should also include someone whose role is explicitly to assess organisational and financial capability — not just programme merit. Many grant panels are expert in the subject matter but not in whether the applying organisation is capable of delivering. These are different competencies.

COI management before the panel starts

COI management has three phases, and most programmes only run the first one well.

Declaration. Every assessor must complete a declaration before they receive any application materials. The declaration should be application-specific: assessors are given the list of applicants (not the applications themselves) and asked to identify any relationships. A generic "I have no conflicts" declaration is not adequate because it does not require the assessor to consider each applicant specifically.

Review. The convenor (and often the programme manager) reviews each declaration for completeness. This is not a rubber stamp. If an assessor declares a relationship with one applicant but the convenor knows of an additional connection the assessor has not mentioned, the convenor has an obligation to follow up. It is also good practice to check declared conflicts against the application list — occasionally a conflict is declared against an applicant who has not submitted in the current round, which suggests the assessor worked from a previous round's list.

Enforcement. A declared conflict must result in the assessor being excluded from all aspects of the assessment for that application: they do not receive the application, they do not score it, and they are not present for deliberation on it. If this is being managed manually — which it should not be, but often is — the convenor needs a reliable system for tracking which assessor has which recusals and for confirming before scores are collected that no conflicted assessor has submitted a score for a restricted application.
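For manually managed processes, the pre-collection check described above can be reduced to a simple cross-reference of recusals against submitted scores. The sketch below is illustrative only — the data layout and all names are assumptions, not any particular platform's model:

```python
# Minimal sketch: confirm no recused assessor has a score recorded
# against an application they are excluded from. All identifiers
# are illustrative.

# assessor -> set of application IDs they are recused from
recusals = {
    "assessor_a": {"APP-014"},
    "assessor_b": set(),
}

# (assessor, application ID) -> submitted score
scores = {
    ("assessor_a", "APP-012"): 7,
    ("assessor_b", "APP-014"): 5,
}

def recusal_breaches(scores, recusals):
    """Return (assessor, application) pairs where a recused
    assessor has nonetheless submitted a score."""
    return [
        (assessor, app)
        for (assessor, app) in scores
        if app in recusals.get(assessor, set())
    ]

breaches = recusal_breaches(scores, recusals)
print(breaches)  # an empty list means the recusals held
```

Running a check like this before scores are aggregated — rather than after the record is drafted — means a breach can be corrected by removing the offending score instead of reopening the record.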

A common failure mode: the conflict is declared, the assessor is noted as recused, and then — in the press of a long panel day — the assessor remains in the room during deliberation on the conflicted application because asking them to leave feels awkward. That is a process failure. The discomfort of asking an assessor to step out is not a reason to compromise the integrity of the recusal.

The pre-panel pack: what assessors need and how to distribute it securely

Assessors who arrive at a panel without having read the applications are a problem that compounds throughout the day. Scoring during the panel meeting — rather than before it — is slower, less considered, and more susceptible to influence from discussion. The pre-panel pack is the mechanism that prevents this.

A well-constructed pre-panel pack includes: the application documents for each application assigned to that assessor, the scoring rubric with anchor descriptions for each criterion, the funding programme's objectives and any policy context the assessor needs, the COI declaration form (for completion before they access the application documents), and clear instructions on the expected scoring process — including the instruction that scoring should be completed independently before the panel meets.

Security matters. Application materials are often commercially or personally sensitive. Distribution by email is common but creates uncontrolled copies. Where a grants management platform supports it, assessors should access their pack through a secure portal where their access is logged, where they cannot see other assessors' materials or scores, and where the platform enforces any COI restrictions (an assessor with a declared conflict cannot open the application they are recused from).

Give assessors enough time. One to two weeks before the panel date is a reasonable minimum for a normal round. Assessors who are given three days and a large application pool will skim. The scores will show it.

Running the scoring round: individual assessment before group deliberation

The principle of independent scoring before collective deliberation is well established in assessment design. In practice, it is frequently compromised.

The failure mode is not usually overt — it is scheduling. If assessors are invited to a full-day panel where the morning is "individual scoring" and the afternoon is "panel discussion," the two phases will bleed together. Assessors will compare notes at lunch. Stronger personalities will share their views during breaks. The nominally independent scores collected at the end of the morning will be partially shaped by those interactions.

The more defensible approach is to collect scores before the panel meets. Assessors complete their scoring in the week before the panel, submit scores through the platform (or return completed rubric sheets by a specified deadline), and the convenor prepares an aggregate score summary before the deliberation session. The deliberation session then has a specific and bounded purpose: to review aggregate scores, discuss high-variance applications, and reach recommendations on borderline cases.

This approach also gives the convenor diagnostic information before the panel. If two assessors are consistently scoring 15-20% higher or lower than their peers, that is worth addressing in the briefing at the start of deliberations — not as a criticism, but as a calibration check. Assessors using different mental benchmarks is not unusual; discussing it explicitly before deliberation reduces the effect.
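The calibration check described above can be computed directly from the collected scores: compare each assessor's mean score against the panel-wide mean and flag large deviations. This sketch assumes a simple in-memory layout and an illustrative 15% tolerance:

```python
# Minimal sketch of a pre-deliberation calibration check: flag
# assessors whose average score deviates from the panel average by
# more than a chosen fraction. Data layout and the 15% tolerance
# are illustrative assumptions.

scores = {
    # (assessor, application ID) -> score out of 10
    ("assessor_a", "APP-001"): 8, ("assessor_a", "APP-002"): 7,
    ("assessor_b", "APP-001"): 5, ("assessor_b", "APP-002"): 4,
    ("assessor_c", "APP-001"): 7, ("assessor_c", "APP-002"): 6,
}

def calibration_flags(scores, tolerance=0.15):
    """Return assessors whose mean score sits more than `tolerance`
    (as a fraction of the panel mean) above or below the panel mean."""
    panel_mean = sum(scores.values()) / len(scores)
    by_assessor = {}
    for (assessor, _), score in scores.items():
        by_assessor.setdefault(assessor, []).append(score)
    return sorted(
        assessor
        for assessor, vals in by_assessor.items()
        if abs(sum(vals) / len(vals) - panel_mean) > tolerance * panel_mean
    )

print(calibration_flags(scores))  # → ['assessor_a', 'assessor_b']
```

A flagged assessor is not a problem assessor — the output is a prompt for the calibration discussion at the start of deliberations, nothing more.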

Facilitated deliberation: purpose, process, and documentation

Deliberation is not scoring. Its purpose is to resolve genuine interpretive disagreements, surface material information that assessors know and that others should have access to, and reach a collective recommendation on borderline cases.

It is not an opportunity to relitigate individual scores wholesale, to advocate for favoured applications on grounds not captured in the rubric, or to produce a different ranked list through discussion when the scoring already produced one. Convenors who allow deliberation to drift into advocacy rather than resolution end up with a panel process in which the scores are treated as advisory and the real decisions are made informally.

The convenor should open deliberation by presenting the aggregate score summary, identifying which applications are comfortably above threshold, which are comfortably below, and which are in the margin that requires discussion. Deliberation focuses on the marginal cases. Applications well above or below threshold generally do not require extended discussion.
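Preparing that summary amounts to banding the aggregate scores around the funding threshold. The sketch below is a hedged illustration — the threshold and margin values are assumptions a programme would set for itself:

```python
# Minimal sketch of banding applications by aggregate score before
# deliberation: comfortably above, comfortably below, and the margin
# that needs discussion. Threshold and margin are illustrative.

aggregate = {"APP-001": 82, "APP-002": 74, "APP-003": 55, "APP-004": 69}

def band(aggregate, threshold=70, margin=5):
    """Split applications into above / marginal / below bands around
    the funding threshold, so deliberation time goes to the margin."""
    bands = {"above": [], "marginal": [], "below": []}
    for app, score in sorted(aggregate.items(), key=lambda kv: -kv[1]):
        if score >= threshold + margin:
            bands["above"].append(app)
        elif score <= threshold - margin:
            bands["below"].append(app)
        else:
            bands["marginal"].append(app)
    return bands

print(band(aggregate))
# → {'above': ['APP-001'], 'marginal': ['APP-002', 'APP-004'],
#    'below': ['APP-003']}
```

Presenting the bands in ranked order within each band keeps the link between the scoring phase and the deliberation agenda visible to the whole panel.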

When deliberation surfaces a material consideration that changes a recommendation — "this organisation has just entered administration" or "this project duplicates work already funded in the same region" — that information, and the reasoning for how it affected the recommendation, must be documented. It should appear in the final recommendation record as a clearly labelled post-scoring consideration, not quietly folded into the scoring summary.

Disagreement among panel members is normal and should not be suppressed. When the panel cannot reach consensus, the convenor has options: note the dissent in the recommendation record, escalate to the programme manager for a decision, or — in programmes with a clear escalation protocol — refer to a second-tier assessment. What is not appropriate is to pretend consensus exists when it does not.

The final recommendation record: what must be captured

The minimum required for a defensible recommendation record is:

  • The list of applications considered, with their unique identifiers
  • The names and roles of all panel members, including who was recused from which applications and why
  • Individual assessor scores by criterion for each application
  • The aggregate scores and ranking produced by the scoring phase
  • A record of any applications where deliberation produced a recommendation different from the ranked order, with documented reasoning
  • The panel's final recommended list, clearly distinguished as a recommendation (rather than a funding decision — that usually rests with a governance body or delegate)
  • The date the panel met and the date the record was finalised

What is optional but often valuable: a brief qualitative summary for each application in the marginal zone, capturing the main points of deliberation; a note of any scoring calibration issues discussed at the panel; and a convenor's notes on any unusual procedural matters that arose.

Do not wait until after the funding decision is made to finalise the panel record. Finalise it immediately after the panel, while the details are current. Convenors who try to reconstruct the record from notes and memory two weeks later produce weaker documentation and introduce risk.

Post-panel obligations: before the record is closed

Before the panel record is finalised and forwarded to the decision-maker, the convenor should complete three checks.

Scoring review. Verify that no arithmetic or data-entry errors have affected the ranking. In a spreadsheet-based process this is a manual calculation check. In a platform-based process it should be automatic, but it is still worth confirming that the scores were submitted correctly — not that an assessor's screen showed the right number, but that the scores that appear in the recommendation record match the scores the assessors intended to submit.
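In a spreadsheet-based process, the scoring review can be made mechanical by recomputing each aggregate from the individual scores and comparing it to the figure written into the record. This is a sketch under assumed data shapes, not a prescription:

```python
# Minimal sketch of the scoring review: recompute each application's
# aggregate (here, a mean) from individual scores and compare it
# against the figure in the recommendation record. Data layout and
# the use of a mean are illustrative assumptions.

raw_scores = {
    "APP-001": [8, 7, 9],
    "APP-002": [6, 5, 7],
}
# Aggregates as transcribed into the draft record
record_aggregates = {"APP-001": 8.0, "APP-002": 6.5}

def aggregate_mismatches(raw_scores, record_aggregates):
    """Return application IDs where the recomputed mean differs
    from the aggregate written into the recommendation record."""
    return sorted(
        app
        for app, vals in raw_scores.items()
        if round(sum(vals) / len(vals), 2) != record_aggregates.get(app)
    )

print(aggregate_mismatches(raw_scores, record_aggregates))  # → ['APP-002']
```

Any mismatch should be traced back to its cause — a transcription error, a late score, or a formula change — before the record is closed, not simply overwritten.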

COI compliance confirmation. Before the record is closed, confirm that no assessor with a declared conflict has any scores recorded against the application they were recused from. If your process is system-enforced this should be automatic. If it is manually managed, this is a required step, not an optional one.

Record completeness. Check that all required elements of the recommendation record are present and accurate — assessor names, recusals, scores, deliberation notes, final recommendation. A record that reaches the decision-maker missing any of these elements may require the panel to be reconvened or supplemented, which creates delay and reputational risk.

What makes a panel process OIA-ready and audit-ready

A panel process is OIA-ready if the documentation it produces can answer, specifically and accurately, any reasonable question about how a funding decision was reached. The key questions are: who assessed each application, what scores they gave, whether any conflicts were present and how they were managed, what deliberation took place, and how the recommended list was reached from the scores.

An audit-ready process is the same, with the additional requirement that the documentation was created contemporaneously — not reconstructed after the fact — and that it is held in a form that can be produced intact and in context.

The practical implication is that a panel process run through email, spreadsheets, and meeting minutes is harder to make audit-ready than one run through a platform that generates the assessment record automatically. This is not because spreadsheets are inherently inadequate — it is because the work of documenting and maintaining the record falls entirely on the convenor rather than on the system, and under end-of-panel time pressure, that work gets compressed.

The difference between an adequate record and an audit-ready one is often not the quality of the information — it is whether that information is structured, accessible, and clearly linked to the decisions it supports.


If your programme is due for a process review or you are setting up a new contestable round, the government grants management page covers how purpose-built infrastructure supports panel probity at scale. To see how Tahua handles your specific programme context, book a demo.