Most software evaluations in the grants sector follow the same pattern. A programme manager receives a vendor demo. The demo showcases a clean intake form, a tidy dashboard, and a screenshot of a report. The features list is compared against a checklist someone put together from another sector's RFP. A price is negotiated. The software is purchased.
Six months later, the same programme manager is running a parallel spreadsheet to track the things the software doesn't quite do.
This guide is an attempt to short-circuit that process. It is written for grants managers, programme officers, and operations leads who are responsible for choosing a platform — not the IT team, not procurement, but the person who will actually use it and defend it to their board.
The argument is straightforward: most software comparisons miss the point because they compare features rather than workflows. A workflow is a sequence of steps, approvals, and records that your actual programme requires. A feature is a capability that may or may not fit into that sequence. The difference matters enormously.
Features lists are the currency of software marketing because they are easy to compare and difficult to disprove. Every platform has a feature called "milestone tracking." Every platform has "reporting." Every platform claims to "integrate with Xero." None of those claims is necessarily false. What they obscure is the depth, the usability, and the fit with your specific operational context.
The relevant test is not "does the software have a milestone tracking feature?" It is "can your programme officer, six months from now, open the system and see at a glance which of your 40 active grantees has a milestone due this week, what evidence was requested, and whether the previous tranche payment cleared?" Those are two very different questions. The first is answerable from a features list. The second can only be answered by someone who has used the system in a real funding round.
The second problem with features-led comparisons is that they overweight the assessment and intake stage and underweight post-award. Assessment is visible, defined, and time-bounded. Post-award is ongoing, less visible, and — for most funders — where most of the administrative complexity actually lives. If your evaluation is driven by the quality of the application form builder, you may select a platform that is excellent at intake and inadequate at everything that follows.
Before you look at a single demo, answer five questions about your own programme.
1. What is your grant volume, and is it likely to grow?
Volume affects almost every aspect of software selection. An organisation processing 20 grants a year has different requirements from one processing 200. More importantly, an organisation whose volume is growing should evaluate for where it will be in three years, not where it is today. The worst outcome is selecting a platform that fits your current size and discovering its limits as your programme expands.
2. What type of programme are you running?
Contestable open rounds, invited applications, quick-response funds, and multi-year strategic investments all have different workflow requirements. A platform well-suited to high-volume open rounds may be poorly configured for the relationship-based workflow of an invited programme. Community foundations managing donor-advised funds have requirements that differ again from government agencies running output-based funding. Be clear about your programme type before evaluating.
3. How complex is your assessment process?
Assessment complexity includes: the number of criteria, whether scoring is weighted, whether you use independent scoring followed by panel deliberation, whether you have conflict of interest requirements, and whether assessors are internal staff, external experts, or a mixture. Some platforms support a structured panel workflow natively. Others treat the assessment stage as a form-collection problem and leave workflow design to the administrator. Know which you need.
4. What are your audit and probity requirements?
This question is discussed in more detail below, but it belongs in your pre-evaluation thinking. Government agencies, Crown entities, local councils, and publicly accountable funders have accountability requirements that most commercial CRM-based platforms are not designed to satisfy. If you are subject to OIA requests, ministerial oversight, or annual audit, your software needs to produce records that can withstand external scrutiny — not just internal reporting.
5. What integration does your organisation actually need?
Integration is the most overclaimed capability in grants software. Almost every platform claims to integrate with Xero, or with Salesforce, or with your HR system. The question is not whether an integration exists but what it actually does. Before evaluating integrations, write down the specific data flows your organisation requires: what information moves between which systems, in which direction, triggered by which events. Then test those specific flows in a demo, not just the existence of an integration badge on the vendor's website.
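One way to make this concrete before any demo is to write the flows down in a structured form. The sketch below is purely illustrative, in Python for precision; every system name, trigger, and field in it is a placeholder for your own.

```python
# A minimal, illustrative data-flow specification to bring to a demo.
# Every system name, trigger, and field here is a placeholder: swap in
# the systems and events your own organisation actually uses.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str        # system where the event originates
    target: str        # system that must receive the data
    trigger: str       # event that initiates the transfer
    payload: list[str] # fields that must arrive intact
    direction: str     # "one-way" or "bidirectional"
    manual_step: bool  # does a person have to do anything?

required_flows = [
    DataFlow("grants_platform", "Xero", "milestone approved",
             ["grantee_name", "amount", "grant_reference"],
             "bidirectional", manual_step=False),
    DataFlow("Xero", "grants_platform", "payment reconciled",
             ["payment_date", "grant_reference"],
             "one-way", manual_step=False),
]

# In the demo, ask the vendor to walk each flow end to end.
for flow in required_flows:
    print(f"{flow.source} -> {flow.target} on '{flow.trigger}'")
```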
Publicly accountable funders — government agencies, councils, Crown-funded bodies, statutory entities — have accountability requirements that go beyond basic record-keeping. They need to demonstrate, to an auditor's standard, that funding decisions were made on stated criteria, by authorised decision-makers, with appropriate conflict of interest management, and that the process was documented throughout.
This has specific software implications.
Decision records. Every significant decision in the funding lifecycle — application received, assessed, recommended, approved, declined, milestone approved, payment released — should generate a timestamped, attributed record. The record should be immutable: it should not be possible for an administrator to edit a past decision after the fact without that edit itself being recorded.
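As an illustration of the property to look for (not any vendor's actual implementation), here is a minimal sketch of an append-only decision log in Python: past records are never edited, a correction is itself a new attributed record, and a hash chain makes silent alteration detectable.

```python
# Illustrative sketch of an append-only decision log. The point is the
# property, not the implementation: past records are never mutated, and
# a correction is itself a new, attributed, timestamped record.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    grant_id: str
    event: str             # e.g. "approved", "milestone_approved"
    actor: str             # who made the decision
    timestamp: str
    supersedes: str | None = None  # digest of the record being corrected
    prev_hash: str = ""    # hash chain makes silent edits detectable

    def digest(self) -> str:
        payload = f"{self.grant_id}|{self.event}|{self.actor}|{self.timestamp}|{self.prev_hash}"
        return hashlib.sha256(payload.encode()).hexdigest()

class DecisionLog:
    def __init__(self):
        self._records: list[DecisionRecord] = []

    def append(self, grant_id: str, event: str, actor: str,
               supersedes: str | None = None) -> DecisionRecord:
        prev = self._records[-1].digest() if self._records else ""
        record = DecisionRecord(grant_id, event, actor,
                                datetime.now(timezone.utc).isoformat(),
                                supersedes, prev)
        self._records.append(record)  # append-only: no update, no delete
        return record
```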
Conflict of interest management. COI declarations should be captured within the system, not in a separate form returned by email. When an assessor declares a conflict, the system should prevent them from accessing or scoring the affected application — not rely on a manual process to enforce that exclusion. For funded programmes under Ministerial direction or public scrutiny, the ability to demonstrate that COI was managed programmatically rather than informally is a material difference.
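A minimal sketch of what programmatic enforcement means, with invented assessor and application IDs: the declaration is structured data, and the exclusion is applied at the point of access rather than by manual convention.

```python
# Illustrative sketch: COI declarations captured as structured data and
# enforced structurally. The assessor and application IDs are invented.
import logging

conflicts = {("assessor_7", "APP-2024-031")}  # (assessor_id, application_id)

def can_score(assessor_id: str, application_id: str) -> bool:
    """A declared conflict structurally excludes the assessor."""
    return (assessor_id, application_id) not in conflicts

def open_for_scoring(assessor_id: str, application_id: str) -> None:
    if not can_score(assessor_id, application_id):
        # The refusal itself is recorded, so the exclusion is demonstrable.
        logging.warning("blocked %s from %s: declared conflict",
                        assessor_id, application_id)
        raise PermissionError("access blocked by COI declaration")
    # ... load the application for scoring ...
```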
Delegation documentation. Who is authorised to approve what? The delegations schedule that governs your funding programme should be reflected in the system's approval workflows, not just documented in a policy. If your delegations require two-person approval above a certain threshold, the system should enforce that, not assume it will be observed manually.
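A sketch of structural enforcement, with a hypothetical threshold and invented names: when the rule lives in the workflow itself, an under-authorised approval simply cannot proceed.

```python
# Illustrative sketch of a delegations schedule enforced in software.
# The threshold and roles below are invented for the example; the real
# values come from your own delegations policy.
TWO_PERSON_THRESHOLD = 50_000  # hypothetical

def approval_satisfied(amount: float, approvers: set[str],
                       authorised: set[str]) -> bool:
    """Approvals only count from authorised delegates, and amounts
    above the threshold require two distinct approvers."""
    valid = approvers & authorised
    required = 2 if amount > TWO_PERSON_THRESHOLD else 1
    return len(valid) >= required

# A $75,000 grant with one authorised approver is not releasable:
assert not approval_satisfied(75_000, {"alice"}, {"alice", "bob"})
assert approval_satisfied(75_000, {"alice", "bob"}, {"alice", "bob"})
```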
Reporting on demand. When an OIA request arrives, or an auditor asks for a summary of all grants approved in a financial year with the assessor scores and the basis for each decision, you need to be able to produce that from the system within an hour, not over several days of manual data assembly. That requires structured data capture throughout, not just at intake.
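If decisions have been captured as structured records throughout, the accountability report is a query rather than a project. A sketch, using an invented schema that a real platform would model far more richly:

```python
# Sketch: an accountability report as a query over structured decision
# data. The schema and rows are invented for illustration; the point is
# that the report falls out of records captured throughout the lifecycle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE decisions (
    grant_id TEXT, event TEXT, actor TEXT,
    basis TEXT, occurred_at TEXT
);
INSERT INTO decisions VALUES
    ('GR-001', 'approved', 'panel_chair', 'score 82/100', '2024-09-12'),
    ('GR-002', 'declined', 'panel_chair', 'score 54/100', '2024-09-12');
""")

report = conn.execute("""
    SELECT grant_id, event, actor, basis, occurred_at
    FROM decisions
    WHERE occurred_at BETWEEN '2024-07-01' AND '2025-06-30'
    ORDER BY occurred_at
""").fetchall()

for row in report:
    print(row)  # every decision, its basis, and its approver, on demand
```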
If a vendor cannot demonstrate each of these capabilities in a working demo — not in a roadmap, not on a slide — treat that as a disqualifying gap for government-facing funders.
Assessment is where most platforms focus their development effort and their demo time, so it is worth being precise about what good looks like rather than being dazzled by a clean interface.
A sound assessment workflow separates three distinct phases: individual scoring, panel review, and recommendation. These phases have different participants, different rules, and different records. In the scoring phase, assessors work independently, without sight of each other's scores. In the panel review phase, scores are visible and deliberation occurs. In the recommendation phase, a final ranking and funding recommendation is produced, with a documented basis.
Platforms that collapse these phases — allowing assessors to see scores while still scoring, or allowing panel deliberations to happen without a separate record — produce assessment processes that are harder to defend and more vulnerable to social influence effects.
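What the separation looks like structurally can be sketched as a simple state machine (the class and method names below are invented for illustration): transitions are one-way, and scores stay hidden until every assessor has submitted.

```python
# Sketch of the phase separation as a state machine: scores are only
# visible once the scoring phase closes, and each transition is one-way.
from enum import Enum

class Phase(Enum):
    SCORING = 1         # assessors work independently
    PANEL_REVIEW = 2    # scores visible, deliberation recorded
    RECOMMENDATION = 3  # ranked list with documented basis

class AssessmentRound:
    def __init__(self, assessors: list[str]):
        self.phase = Phase.SCORING
        self.scores: dict[str, int] = {}
        self.assessors = assessors

    def submit_score(self, assessor: str, score: int) -> None:
        if self.phase is not Phase.SCORING:
            raise RuntimeError("scoring is closed")
        self.scores[assessor] = score

    def view_scores(self, viewer: str) -> dict[str, int]:
        # The gate: no one sees others' scores while scoring is open.
        if self.phase is Phase.SCORING:
            raise PermissionError("scores hidden until scoring completes")
        return dict(self.scores)

    def close_scoring(self) -> None:
        if set(self.scores) != set(self.assessors):
            raise RuntimeError("cannot advance: scores outstanding")
        self.phase = Phase.PANEL_REVIEW  # one-way transition
```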
Specific capabilities to probe in a demo:

- Can you configure weighted criteria?
- Can the system enforce score completion before an assessor can see other scores?
- Does the system produce a panel summary report showing individual scores, variance, and the basis for any score overrides?
- Can you manage the COI process — declaration, exclusion, and record — within the system?
"We integrate with Xero" can mean at least three different things: a CSV export formatted for Xero upload, an API connection that creates records in Xero from inside the grants system, or a bidirectional sync that updates grant records when payments are reconciled in Xero.
For most grant payment workflows, the CSV export is not adequate. It requires a manual step (someone runs the export, uploads it, checks for errors), which means the integration is only as reliable as the person performing it. It also creates a point of failure for audit purposes: the export is a manual act, and if it is skipped or performed incorrectly, the records in the two systems diverge.
A genuine Xero integration creates a payable record in Xero automatically when a milestone is approved in the grants system. Finance sees the bill in Xero, with the grantee details, payment amount, and grant reference pre-populated. They approve it in Xero, and the payment goes out. The reconciliation record links back to the milestone approval in the grants system. No CSV. No manual handoff. No possibility of the records drifting apart.
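Mechanically, that flow might look like the sketch below, written against Xero's public Accounting API, where a bill is an invoice of type ACCPAY. The access token, tenant ID, contact ID, and account code are placeholders, and a production integration would also handle OAuth2 token refresh and failures; treat this as an outline of the shape, not a drop-in implementation.

```python
# Hedged sketch: creating a bill (an ACCPAY invoice) in Xero when a
# milestone is approved, via Xero's Accounting API. The token, tenant
# id, contact id, and account code are placeholders; a real integration
# also needs OAuth2 token refresh, retries, and error handling.
import requests

def create_xero_bill(access_token: str, tenant_id: str,
                     contact_id: str, amount: float, grant_ref: str):
    resp = requests.post(
        "https://api.xero.com/api.xro/2.0/Invoices",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Xero-Tenant-Id": tenant_id,
            "Accept": "application/json",
        },
        json={
            "Invoices": [{
                "Type": "ACCPAY",  # a bill payable to the grantee
                "Contact": {"ContactID": contact_id},
                "LineItems": [{
                    "Description": f"Grant payment {grant_ref}",
                    "Quantity": 1,
                    "UnitAmount": amount,
                    "AccountCode": "400",  # placeholder expense account
                }],
                "Reference": grant_ref,  # links the bill to the grant
            }]
        },
    )
    resp.raise_for_status()
    return resp.json()
```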
When evaluating any integration claim: ask to see it demonstrated in a live environment, ask who initiates the data transfer and whether it requires a manual step, and ask what happens to records in both systems when a payment is reversed or amended. The answers will tell you whether you have an integration or a workaround dressed as one.
The software is not the hard part. Selecting it takes weeks. Configuring it takes weeks. Getting your team off spreadsheets takes months.
This is the aspect of software procurement that vendors have the least incentive to be honest about, and it is the aspect that most frequently determines whether an implementation succeeds. A platform that is technically well-suited to your needs but poorly implemented — with staff who don't trust it, parallel spreadsheets running alongside it, and an administrator who is the only person who knows how it works — delivers a fraction of its potential value.
Before signing a contract, have a specific conversation with your vendor about implementation support: what is included, what is not, and what the typical implementation timeline is for an organisation of your size. A realistic timeline for a mid-sized funder implementing for the first time is six to twelve weeks from contract to first live round — not because the configuration is complex, but because data migration, staff training, and process documentation take time and require your people's attention alongside their ordinary workload.
The question to ask yourself is not "can we implement this software?" but "do we have a named person who will own this implementation, and do they have the capacity to give it their attention for the next eight weeks?" If the answer is no, either build that capacity before you start or select a simpler platform with a lighter implementation footprint.
The demo runs against data that looks nothing like yours. A demo environment populated with tidy, curated data tells you how the software looks, not how it behaves under real conditions. Ask to see the platform loaded with a volume and complexity representative of your programme.
Post-award is covered in one slide. If the demo spends 40 minutes on intake and assessment and five minutes on milestone tracking, payment management, and accountability reporting, that distribution reflects the product's maturity. Post-award is not a secondary feature; for most funded programmes, it is the majority of the operational lifecycle.
The answer to every capability question is "yes, we can configure that." Configuration is not capability. A platform that requires custom development or complex configuration to support your standard workflow is not designed for your context; it is being retrofitted into it. Ask to see the configuration demonstrated, not promised.
The platform is a generic CRM with a grants layer on top. Salesforce-based and HubSpot-based grants solutions exist, and some organisations use them successfully. But they are built for a different primary use case — sales and customer relationship management — and they carry the overhead and pricing model of that heritage. For a funder whose primary accountability is to grantees and funders (not customers and clients), the conceptual mismatch matters. Ask about the platform's origin and primary sector.
The pricing model rewards complexity. Platforms that charge by user, by application, or by feature tier create incentives for the vendor to expand the footprint and for the buyer to restrict usage. A grants management platform should be priced in a way that encourages your whole team to use it, not one that makes you think twice about adding another programme officer to the system.
Use this checklist to structure your evaluation across vendors:
Programme fit
- [ ] Does the platform support your specific programme type (open rounds, invited, multi-year, quick-response)?
- [ ] Does it scale to your anticipated volume in three years?
- [ ] Is it used by organisations with comparable accountability requirements?
Assessment workflow
- [ ] Can scoring be configured as weighted criteria with anchor descriptions?
- [ ] Does the system enforce independent scoring before scores are visible?
- [ ] Is COI management built into the system (not just a declaration form)?
- [ ] Does the panel workflow produce a documented recommendation with an audit trail?
Post-award and milestone management
- [ ] Are milestones tracked natively, within the same system as assessment?
- [ ] Can payment release be gated on milestone approval?
- [ ] Is there a structured evidence review workflow with named approvers?
- [ ] Can you see, at a glance, all overdue milestones across your active portfolio?
Audit and probity
- [ ] Are all significant decisions timestamped and attributed in an immutable record?
- [ ] Can the system produce an accountability report (all decisions, basis, approvers) on demand?
- [ ] Are delegations enforced structurally, not just documented in policy?
Integration
- [ ] What specifically happens in your finance system when a milestone is approved?
- [ ] Is the integration bidirectional, or one-way?
- [ ] Does it require a manual step to initiate?
Implementation
- [ ] What is the typical implementation timeline for an organisation of your size?
- [ ] What implementation support is included in the contract?
- [ ] Are there reference customers you can speak to directly?
Pricing
- [ ] Is the pricing model per user, per application, or flat?
- [ ] Are there costs for adding additional programmes or rounds?
- [ ] What does the renewal price look like after year one?
Selecting grants management software is a consequential decision. The platform you choose will shape how your team works, how your programme is documented, and how your accountability story holds up when it is tested. The evaluation framework above is designed to ensure you are asking the questions that matter before you are committed to a contract.
If you would like to see Tahua demonstrated against your specific programme requirements, book a 30-minute conversation. We will show you the full lifecycle — from intake through post-award — using scenarios that reflect your context, not a generic demo script.