"Purpose-built for grants management" is a claim made by a wide range of software vendors. For most buyers, it means the system handles intake and assessment reasonably well. For government funders, it needs to mean something more specific than that.
Government agencies and Crown entities administering grants operate under accountability obligations that do not apply to private foundations or community trusts. OIA requests. Ministerial briefings. Audit requirements. Conflict-of-interest obligations that must be demonstrable, not merely claimed. The system that works adequately for a charitable foundation may fail silently for a government funder — not because it is bad software, but because it was not designed with those obligations in mind.
"Purpose-built for accountability" has a precise meaning in this context: the audit trail, COI management, decision documentation, and reporting capabilities were designed as core functions from the outset, not bolted on later. This article explains what that means in practice and how to evaluate whether a vendor's claims hold up.
The distinction between government and non-government funder requirements is not a matter of scale. A community foundation distributing $20M annually has complex needs. A government agency distributing $5M has different ones.
The difference is accountability architecture. Government funders operate under obligations that require them to demonstrate, on demand, the basis for every funding decision. An OIA request for "all correspondence and documentation related to the assessment of [organisation] in the [programme] round" requires a complete, retrievable record — not a search through email threads and spreadsheet versions.
Ministerial reporting requires accurate, timely data presented at a level of aggregation that is not readily extractable from an applicant-by-applicant spreadsheet. When a Minister needs to know the total value of commitments in a particular region, or the number of grants in the pipeline for the coming financial year, that information needs to be producible without a staff member spending two days pulling it together.
Audit requirements — from internal audit, from the Auditor-General, or from parliamentary select committees — require evidence that the process followed was the process documented. Not approximately. Not in general terms. Specifically: who assessed which application, when, with what score, and on what declared basis.
These requirements do not emerge from bureaucratic preference. They exist because public money is involved, and the public has a legitimate interest in knowing it was administered fairly. A software system that supports them is not more complex than one that doesn't — it is differently designed.
Complete decision provenance. Every action in the grant lifecycle should be timestamped and attributed to a named user. Not just decisions — drafts, status changes, assessor assignments, score submissions, and communications. When an auditor asks "who approved the payment for contract X and when?", the answer should be a three-second query, not a three-day investigation.
Systems that fail this requirement typically do so silently. They record current state without recording state transitions. You can see what the record looks like now; you cannot see the history of how it got there. This is adequate for programme management and catastrophic for accountability.
Assessor recusal documentation. In any government-funded programme with an assessment panel, conflicts of interest must be identified, declared, and documented before assessment begins. The documentation must be retrievable. "We asked assessors to declare conflicts" is not adequate. "Assessor X declared a conflict with Application Y on [date]; the system recorded this declaration and restricted their access accordingly" is adequate.
Most systems have a notes field or an email workflow for COI management. Neither produces the kind of attributable, timestamped record that satisfies an audit.
Decision letter traceability. Funding decision letters — whether approvals or declines — should be traceable to the specific assessment record and decision-maker. The letter sent should be stored against the application. If the letter referenced specific criteria or conditions, those should be queryable. When a declined applicant challenges a decision, or when a funder is asked to explain its reasoning, the decision letter alone is not sufficient — the chain from application to assessment to decision to letter must be intact.
For a detailed treatment of what government-grade audit trails require, see our article on what an audit trail in grants management means and what government-grade looks like.
Conflict-of-interest management is frequently listed as a feature in grants software marketing. In practice, there are two very different things being described under that label.
The first is a COI declaration workflow — a form that asks assessors to declare conflicts, and a field that records their response. This is documentation. It tells you that a declaration was requested. It does not tell you what happened as a result.
The second is COI enforcement — a system that, once a conflict is declared, automatically restricts the declaring assessor's access to that application's data, scores, and assessor notes. They cannot see the application. They cannot see other assessors' scores for it. They cannot influence the assessment, and the system records that they could not.
The difference between these two approaches is not cosmetic. In a government context, demonstrating that COI management occurred is not sufficient. You must be able to demonstrate that the declared conflict had a specific, verifiable consequence for the assessor's access to that decision.
Auto-recusal — where the declaration triggers access restriction automatically, without a programme officer having to manually revoke permissions — is the standard that matters. It removes the human error from the process and produces an unambiguous record: "Assessor declared conflict on [date]; access to Application X was restricted from that point."
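The mechanics of auto-recusal can be sketched in a few lines: the declaration and the access restriction are a single atomic step, and the restriction itself is part of the record. This is an illustrative sketch with hypothetical names, not Tahua's implementation.

```python
from datetime import datetime, timezone

class AssessmentRound:
    """Sketch of auto-recusal: declaring a conflict immediately restricts
    the declaring assessor's access, with no manual permission change."""

    def __init__(self):
        self._blocked: dict[str, set[str]] = {}   # application_id -> blocked assessors
        self._coi_log: list[tuple[str, str, datetime]] = []

    def declare_conflict(self, assessor: str, application_id: str) -> None:
        # One atomic step: record the declaration AND restrict access.
        self._blocked.setdefault(application_id, set()).add(assessor)
        self._coi_log.append((assessor, application_id, datetime.now(timezone.utc)))

    def can_view(self, assessor: str, application_id: str) -> bool:
        return assessor not in self._blocked.get(application_id, set())

    def coi_record(self, application_id: str) -> list[tuple[str, datetime]]:
        # Audit answer: who declared a conflict with this application, and when.
        return [(a, t) for a, app, t in self._coi_log if app == application_id]
```

The point of the design is that there is no window between declaration and restriction in which access depends on a human remembering to revoke it.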
Government programmes typically operate on funding cycles that align with fiscal years, electoral cycles, and ministerial priorities. Reporting to Ministers, select committees, and Treasury often occurs on a schedule that is not entirely predictable — a parliamentary question can arrive with 24 hours' notice, and a select committee appearance may require data that was not in the previous reporting cycle.
The implication for software is that the reporting capability needs to be self-service for programme staff, not dependent on vendor support or IT assistance. When a programme manager needs to produce a summary of all active contracts in a particular region, broken down by funding amount and stage, that should be a matter of minutes, not a support ticket.
This requires that the system's data model is designed for reporting, not just for transaction processing. A system designed to manage individual applications through a workflow is not necessarily designed to aggregate across applications — unless the data structure was built with that aggregation in mind.
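The difference is between per-record workflow data and data that can be rolled up on demand. A minimal sketch of the kind of aggregation a ministerial question requires follows; the field names (`region`, `stage`, `amount`) are hypothetical illustrations.

```python
from collections import defaultdict

def commitments_by_region(contracts: list[dict]) -> dict:
    """Summarise contracts by (region, stage): count and total value.

    `contracts` is assumed to be a list of dicts with 'region', 'stage',
    and 'amount' keys — hypothetical field names for illustration.
    """
    summary: dict = defaultdict(lambda: {"count": 0, "total": 0})
    for c in contracts:
        key = (c["region"], c["stage"])
        summary[key]["count"] += 1
        summary[key]["total"] += c["amount"]
    return dict(summary)
```

If the underlying data model stores amounts, regions, and stages as structured fields rather than free text in per-application records, this kind of summary is a query, not a two-day manual exercise.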
Te Māngai Pāho, which administers a $60M+ annual programme across multiple funding rounds, has achieved this kind of scale with Tahua. As its CEO Larry Parr noted: "Our small team has more than doubled both the number of Funding Rounds and the number of contracts under management in the last two years, thanks to Tahua." That scale is only possible when programme staff are not spending their capacity on manual data aggregation and report compilation.
Similarly, NZ On Air uses Tahua to meet its parliamentary reporting obligations — a use case that requires accurate, timely data in a format that satisfies parliamentary standards, produced without significant manual intervention.
Data sovereignty is increasingly a material requirement for government procurement in New Zealand and Australia, not merely a preference. The question is not "where is the data hosted in principle?" but "can you demonstrate, with contractual specificity, that New Zealand government data does not leave New Zealand?" — or in the Australian context, that it stays within Australian jurisdiction.
This matters for three reasons. First, some government data classifications carry explicit requirements about jurisdictional storage. Second, cloud providers' terms of service regarding data access by the provider, its subprocessors, and by foreign governments under foreign legislation can be material. Third, for agencies with strong obligations to iwi, hapū, and Māori communities, the question of where data about those communities resides has cultural and political dimensions beyond the technical.
Tahua is hosted on AWS ap-southeast-2, the Sydney region, keeping data within Australasian jurisdiction. This is not merely a marketing statement; it is a procurement-relevant fact that can be specified in a contract. For NZ government agencies evaluating grants software, it removes a category of risk that would otherwise require mitigation.
For agencies with information security requirements aligned to NZISM (the New Zealand Information Security Manual), the relevant question is whether the platform's security controls are aligned to that framework. Tahua is NZISM-aligned and uses AES-256 encryption at rest and in transit, with SOC 2 Type II certification in progress.
When evaluating grants management software for a government or Crown entity context, the following questions cut through vendor positioning:
Audit trail. Can you show me every action taken on a specific application — including who, what, and when — going back to submission? Is this view available to programme staff without IT involvement?
COI management. When an assessor declares a conflict, does the system automatically restrict their access to that application's data, or does a programme manager have to manually change permissions? Are the declaration and the access restriction both recorded?
Decision documentation. Are decision letters stored against the application record? Can you trace from a funded grant back through the assessment record to the specific scores and criteria that drove the decision?
Reporting. Can programme staff produce cross-programme, cross-round summary reports without vendor support? What does a ministerial briefing data export look like, and how long does it take?
Data sovereignty. Where is data hosted? Can you provide contractual confirmation of the jurisdiction? What are the provider's obligations under the hosting country's legislation regarding government access to customer data?
Security. What certifications does the platform hold? Is it aligned to NZISM? What is the data encryption standard at rest and in transit?
A vendor who answers these questions with specifics — not generalities — has designed their system with government requirements in mind. A vendor who answers them with reassurances rather than facts has not.
For more on how Tahua is designed for government accountability, see our government grants management solution page.
Government procurement conversations are welcome at Tahua. We have security documentation, audit trail demonstrations, and COI workflow walkthroughs ready for your risk and legal teams. Book a conversation.