The Audit Problem: What Government Grants Teams Get Wrong About Probity

The most common response to an audit finding on a grants programme is a policy update. A new declaration form. A revised process document. A reminder to staff about requirements at the beginning of the next round.

None of these things are wrong. But they address the surface of the problem, not the cause. The cause is that probity in most government grants programmes is managed through behavioural controls — things people are expected to remember to do — rather than through structural controls — things the system requires before the process can continue.

Behavioural controls fail when people are busy, when staff change, when a programme officer is processing 200 applications in four weeks, or when the conflict of interest declaration form is sent in the same email as the assessment pack and the assessor inadvertently opens the wrong attachment first. Structural controls fail only when the system fails. The two are not equivalent.

What auditors actually look at in a grants programme

An audit of a government grants programme is not primarily an audit of outcomes. It is an audit of process integrity — the question is not "did these grants achieve their intended purpose?" but "was the process used to select and administer these grants conducted fairly, consistently, and in accordance with the stated criteria and obligations?"

The specific things auditors examine:

Decision records. Is there a documented record of who made each funding decision, on what basis, using what assessment criteria, and what the outcome was? For declined applications, is the reason for decline recorded and consistent with the published criteria?

Conflict of interest management. Did every assessor complete a declaration? Were declared conflicts acted upon — meaning conflicted assessors were removed from those applications? Is there a record of who made the decision to recuse and when?

Process consistency. Was the same process applied to all applications in the round? Were there exceptions, and if so, were they documented and authorised?

Communication records. Are outbound communications to applicants — decisions, requests for information, notifications — logged with timestamps? If an applicant claims they did not receive a decision letter, can you demonstrate it was sent?

Delegation and authorisation. For each funding decision and payment approval, was the person who acted within their delegated authority? Is that delegation on record?
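
The delegation check above is mechanical enough to sketch in code. The following is an illustrative sketch only, assuming a hypothetical delegation register (the `Delegation` fields, officer IDs, and amounts are invented for the example, not any real system's schema): an approval is within authority only if the approver held a recorded delegation covering that amount on the date they acted.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical delegation register entry: who may approve, up to what
# amount, over what period. All names and fields are illustrative.
@dataclass
class Delegation:
    officer_id: str
    approval_limit: float   # maximum amount this officer may authorise
    valid_from: date
    valid_to: date

def is_within_delegation(register: list[Delegation], officer_id: str,
                         amount: float, on_date: date) -> bool:
    """True only if a recorded delegation covered this officer, this
    amount, and this date — the question the auditor is asking."""
    return any(
        d.officer_id == officer_id
        and d.valid_from <= on_date <= d.valid_to
        and amount <= d.approval_limit
        for d in register
    )

register = [Delegation("officer-42", 50_000.0, date(2024, 7, 1), date(2025, 6, 30))]
print(is_within_delegation(register, "officer-42", 30_000.0, date(2024, 11, 5)))  # True
print(is_within_delegation(register, "officer-42", 80_000.0, date(2024, 11, 5)))  # False
```

The point of the sketch is that the answer is computable from records that exist before the decision, not reconstructed from email afterwards.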

When auditors cannot find these records, or find them inconsistently, the finding is not "the team was dishonest." The finding is "the process did not produce the evidence required to demonstrate that the process was followed." These findings are embarrassing; they trigger corrective action requirements, and in serious cases they result in programme reviews or funding suspensions.

The difference between having a probity policy and having a probity-grade process

A probity policy says what should happen. A probity-grade process is the sequence of steps and enforcement mechanisms that makes it happen — and that produces a record demonstrating that it happened.

Consider conflict of interest management. A probity policy says: "Assessors must declare any conflict of interest with applicants in the assessment pool before beginning their assessments. Assessors with declared conflicts will not assess the relevant applications."

That policy is sound. The question is what the process actually is.

If the process is: "Send assessors a COI declaration form by email; they return it completed; programme officer reviews returns and manually adjusts assessment packs if conflicts are declared" — then the policy is supported by a behavioural process. It depends on the assessor returning the form before opening their assessment pack. It depends on the programme officer reviewing every return before distributing assignments. It depends on the adjusted assessment pack being correctly prepared and no conflicted application accidentally included. Every one of those steps is a point where human error creates a probity risk.

If the process is: "Assessors log into the assessment portal and complete a structured COI declaration before their queue is available. When a conflict is declared, the system automatically removes that application from their queue and records the declaration, the assessor's identity, and the timestamp" — then the policy is supported by a structural process. The assessor cannot access their assignments until the declaration is complete. The conflicted application is removed without manual intervention. The record is automatic.
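
The structural version of that process can be sketched in a few lines. This is a minimal illustration under assumed names (the `AssessmentPortal` class, its methods, and the IDs are hypothetical, not a real product's API): the queue is simply unreachable until a declaration exists, and the removal and the record happen in the same step as the declaration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CoiDeclaration:
    assessor_id: str
    conflicted_application_ids: set[str]
    declared_at: datetime

class AssessmentPortal:
    """Sketch of a structural COI control: no declaration, no queue;
    declared conflicts are removed automatically and logged."""

    def __init__(self, application_ids: list[str]):
        self.application_ids = application_ids
        self.declarations: dict[str, CoiDeclaration] = {}
        # (assessor, application, action, timestamp) — the automatic record
        self.audit_log: list[tuple[str, str, str, datetime]] = []

    def submit_declaration(self, assessor_id: str, conflicts: set[str]) -> None:
        now = datetime.now(timezone.utc)
        self.declarations[assessor_id] = CoiDeclaration(assessor_id, conflicts, now)
        for app_id in conflicts:
            self.audit_log.append((assessor_id, app_id, "recused", now))

    def queue_for(self, assessor_id: str) -> list[str]:
        decl = self.declarations.get(assessor_id)
        if decl is None:
            # Structural control: access is impossible, not merely discouraged.
            raise PermissionError("COI declaration required before queue access")
        return [a for a in self.application_ids
                if a not in decl.conflicted_application_ids]
```

Calling `queue_for` before `submit_declaration` raises rather than warns — that is the difference between a behavioural control and a structural one.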

The policy is identical. The process integrity is not.

The three most common structural gaps in government grants administration

Gap 1: COI declarations are not linked to specific applications.

The most common COI process in government grants programmes asks assessors to declare any interests they have that might affect their impartiality — and records that declaration in a form that lives in a folder. The declaration is not linked to specific applications in the pool. It is not enforced at the application level. If an assessor declares a general interest in a sector and is then assigned an application from an organisation in that sector, the connection is only made if someone reviews both records simultaneously.

A probity-grade COI process links declarations to specific applicants, organisations, or individual projects. When a conflict is declared, the enforcement is application-specific and automatic. The record shows: this assessor, this application, this conflict, this action taken, this date.
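
Application-level linkage is what closes the "general interest in a sector" gap described above. As a hedged sketch (the application metadata and organisation names are invented for illustration): a declared organisational interest is resolved to the specific applications it touches, so recusal can be enforced per application rather than left to someone noticing the overlap.

```python
# Hypothetical pool: each application records its applicant organisation.
applications = {
    "APP-001": {"organisation": "Kōwhai Trust"},
    "APP-002": {"organisation": "Harbour Media Ltd"},
    "APP-003": {"organisation": "Kōwhai Trust"},
}

def resolve_conflicts(declared_org: str) -> list[str]:
    """Resolve a declared organisational interest to the specific
    application IDs it affects, so enforcement is application-level."""
    return [app_id for app_id, meta in applications.items()
            if meta["organisation"] == declared_org]

print(resolve_conflicts("Kōwhai Trust"))  # ['APP-001', 'APP-003']
```

The output of this resolution step is exactly the record the paragraph describes: this assessor, these applications, this conflict, this action.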

Gap 2: Assessment records are not complete for declined applications.

Funded applications are well-documented. The contract, the payment record, and the milestone history create a paper trail that is relatively easy to reconstruct. Declined applications are often documented only by the assessment score and a decision letter — if the letter was archived at all.

Under the Official Information Act (OIA), a declined applicant can request the information used to make the decision about their application. If the only record is a score in a spreadsheet and a letter generated in Word, and the letter is not filed against the application, the response to that OIA request may be incomplete or may require reconstruction from multiple sources.

A complete assessment record for a declined application includes: the application as submitted, the assessment scores and score rationale from each assessor, the panel summary or deliberation record if applicable, the formal decision record, and the outbound communication to the applicant with timestamp. The record should be retrievable from the grants management system without requiring reconstruction.

Gap 3: Payment authorisation is not linked to the milestone or grant record.

Government grants programmes frequently have payment obligations managed by a finance system that is entirely separate from the grants management system. The grants team approves a milestone; the finance team processes a payment. The connection between the approval and the payment may exist only in an email, a shared spreadsheet entry, or an informal understanding.

When an auditor asks "was this payment authorised, and was the authorisation linked to an approved milestone?", the answer should be findable in the grants system and confirmed in the finance system. When the two systems are not connected and the authorisation trail lives in email threads, the answer is reconstructed after the fact — which creates risk both for the accuracy of the reconstruction and for the appearance of the process.

What a complete, OIA-ready probity record requires

An OIA-ready grants programme record — one that can respond to an information request about a specific application without requiring significant reconstruction — contains the following for each application processed:

  • The application as submitted, including all attachments.
  • Eligibility screening record: who assessed eligibility, what criteria were applied, what the outcome was.
  • Assessment records: which assessors were assigned, their scores by criterion, any scoring rationale they provided, and the date of submission.
  • COI declarations: each assessor's declaration for this application pool, any conflicts declared in relation to this specific application, and the action taken.
  • Panel or moderation record: if a panel reviewed scores, who participated, what was discussed, and what was resolved.
  • Decision record: the formal funding decision, attributed to the decision-maker, with the date and the basis for the decision.
  • Communications: all outbound communications to the applicant, with timestamps and content.
  • Post-award record (for funded applications): contract, milestone approvals, payment records, accountability reports.
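
The checklist above can be treated as a completeness test rather than a filing aspiration. A sketch under assumed names (the field names mirror the list but are illustrative, not a real schema; the conditional components — panel record, post-award record — are omitted for brevity): a record is OIA-ready when this function returns nothing.

```python
# Hypothetical record structure mirroring the checklist above.
REQUIRED_COMPONENTS = [
    "application", "eligibility_screening", "assessments",
    "coi_declarations", "decision_record", "communications",
]

def missing_components(record: dict) -> list[str]:
    """Name every part of the probity record that is absent or empty —
    an empty result is the 'retrievable without reconstruction' standard."""
    return [k for k in REQUIRED_COMPONENTS if not record.get(k)]

record = {
    "application": {"id": "APP-014", "attachments": ["proposal.pdf"]},
    "eligibility_screening": {"assessed_by": "officer-7", "outcome": "eligible"},
    "assessments": [{"assessor": "a-1", "scores": {"criterion_1": 4}}],
    "coi_declarations": [{"assessor": "a-1", "conflicts": []}],
    "decision_record": None,   # gap: no attributed decision on file
    "communications": [],      # gap: no logged outbound letters
}
print(missing_components(record))  # ['decision_record', 'communications']
```

Run across a whole round, a check like this turns "is our record complete?" from a retrospective audit exercise into a routine report.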

For a government agency administering $60 million or more in public funds annually — as Te Māngai Pāho does — this record needs to be maintained at scale, consistently, and for multiple concurrent funding rounds. That is not achievable with a process dependent on individual staff behaviours and manual filing. It requires a system that produces the record as a natural output of the process, not as a retrospective task.

How to evaluate whether your current process would satisfy scrutiny

A practical test: choose three applications from your last completed round — one funded, one borderline, one declined — and attempt to retrieve the complete probity record for each one. Set yourself a limit of 15 minutes per application.

Can you locate the COI declarations for each assessor involved? Are they linked to the specific application or to the round generally? Can you retrieve the assessment scores and rationale? Is the panel or moderation record retrievable? Is the decision record attributed to a named person? Is the outbound communication archived with a timestamp?

If you cannot do this in 15 minutes without accessing multiple separate systems, email folders, or shared drives, your current process does not produce the record that probity-grade administration requires. It produces the evidence that the process existed, not the evidence that it was followed correctly for each application.

The gap between those two things is where audit findings live.

The case for redesigning the process before the next audit finding

Probity process redesign is easier to justify before an audit finding than after one. Before a finding, it is an improvement initiative. After a finding, it is a corrective action — which comes with a timeline, an Office of the Auditor-General (OAG) recommendation, and a Minister's expectation of a progress report.

The redesign does not require rebuilding everything at once. The most effective starting point is usually conflict of interest, because it is the highest-risk area and the most structurally fixable. Moving from a form-and-email COI process to a system that enforces declarations before granting access to assessment queues closes the most significant probity gap in most government grants programmes.

The second priority is typically the completeness of the declined application record — ensuring that every application that is not funded has a retrievable assessment and decision record, not just a score in a spreadsheet.

The third is the link between milestone approval and payment authorisation — creating a verifiable chain of custody between the grants management record and the finance system.

For government agencies and Crown entities, Tahua's government grants management capabilities are designed specifically for this environment: OIA-ready records, structural COI enforcement, and an audit trail where every action is timestamped and attributed to a named user.

For the definition of what a government-grade audit trail requires and the specific events it must capture, see What Is an Audit Trail in Grants Management — and What Does Government-Grade Actually Mean?.

If you are preparing for a review or redesigning your probity process ahead of the next round, book a 30-minute demo to see how a purpose-built system handles the structural requirements.