The Complete Guide to Grants Management: From Application to Final Report

Grants management is not a form-collection problem. That is the most common and most consequential misunderstanding in the field. Organisations that have spent years asking "how do we build a better application form?" are often the same organisations struggling with unsigned contracts, missed milestone reporting, payment bottlenecks, and accountability gaps that only become visible at audit time.

The form is the front door. The programme is everything that happens after someone walks through it.

This guide covers the full grants lifecycle — from programme design and round configuration through to final accountability reporting — for funders who want to understand what professional grants management actually requires before choosing a system, redesigning a process, or aligning a team on how the work should flow.

What grants management actually covers (the full lifecycle)

The grants lifecycle has eight distinct phases, each with its own operational requirements, compliance obligations, and data needs. Most organisations have clear visibility of the first two or three phases (application intake and assessment) and progressively less visibility of the later phases (contracts, milestones, payments, accountability). That distribution of attention is roughly the inverse of where the financial risk lives.

The eight phases are:

  1. Programme design and round configuration
  2. Application intake and eligibility screening
  3. Assessment and panel review
  4. Decisions and offer letters
  5. Contracts and grant agreements
  6. Milestone tracking and progress reporting
  7. Payments and financial reconciliation
  8. Final accountability and reporting

An audit trail runs through every phase. That is not a separate system — it is the continuous record of who did what, when, and why, from the moment a round is configured to the moment the final report is signed off. For organisations with parliamentary reporting obligations, OIA exposure, or donor accountability requirements, the audit trail is not optional. It is the programme record.

Phase 1: Programme design and round configuration

Before any application is accepted, the programme itself needs to be defined and configured. This phase is often treated as administrative preparation rather than substantive grants management — but decisions made here constrain or enable everything that follows.

Round configuration includes: setting the total funding envelope and any per-applicant caps or floors; defining eligibility criteria; designing the application form and its questions; setting the assessment criteria and their weights; configuring the panel (assessors, convenor, roles, permissions); and establishing the timeline including opening, closing, assessment, decision, and notification dates.

Getting this phase right matters because changes mid-round are expensive. Eligibility criteria that turn out to be ambiguous generate support queries and contested decisions. Assessment criteria that were not clearly defined before scoring began produce panel disagreements and inconsistent scores. Application questions that collect the wrong data cannot be corrected once applications are in.

In a purpose-built grants system, round configuration is done in a structured environment that enforces completeness before the round goes live. A drag-and-drop form builder allows the programme manager to design the application form and preview it from the applicant's perspective. Weighted scoring rubrics are set before the assessment phase opens. Panel members are assigned roles with appropriate permissions. The round configuration becomes part of the programme record and is visible to auditors.

Phase 2: Application intake and eligibility screening

Application intake is the most visible phase of the grants lifecycle and, as a consequence, the most over-engineered. Funders spend significant time optimising the applicant experience — portal branding, form design, auto-save functionality — and this is appropriate. But intake design also has a strategic function that is often underused: eligibility screening.

Eligibility screening that happens before an ineligible applicant completes a full application saves time for both the applicant and the assessor. A well-designed eligibility gate asks the three or four questions that most reliably identify ineligible applications — organisation type, geographic coverage, funding request range, project timing — and stops the application at that point with a clear explanation. This is not a barrier to access; it is a service to applicants who would otherwise invest significant time in an application that cannot succeed.

For funders receiving large application volumes, pre-intake screening also reduces assessment workload substantially. A round that receives 200 applications but screens 40 ineligible ones before they enter the assessment queue has reduced assessor workload by 20% before a single substantive review has begun.
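The pre-intake gate described above amounts to a small set of hard rules applied before the full form opens. A minimal sketch in Python, using entirely illustrative criteria values (these are not any particular funder's rules):

```python
# Illustrative eligibility gate: a few hard checks run before the
# applicant invests time in the full application. All threshold
# values and categories here are hypothetical examples.

ELIGIBLE_ORG_TYPES = {"charitable trust", "incorporated society"}
ELIGIBLE_REGIONS = {"Bay of Plenty", "Waikato"}
REQUEST_MIN, REQUEST_MAX = 5_000, 50_000

def screen_application(org_type: str, region: str, amount: float):
    """Return (eligible, reason) so a declined applicant gets a clear explanation."""
    if org_type.lower() not in ELIGIBLE_ORG_TYPES:
        return False, "Organisation type is not eligible for this round."
    if region not in ELIGIBLE_REGIONS:
        return False, "This round funds projects in specific regions only."
    if not (REQUEST_MIN <= amount <= REQUEST_MAX):
        return False, f"Requests must be between ${REQUEST_MIN:,} and ${REQUEST_MAX:,}."
    return True, "Eligible to proceed to the full application."
```

The value of the structure is the reason string: the gate declines with an explanation, which is what turns early screening into a service rather than a barrier.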

Intake also needs to handle: draft saving (applicants frequently need multiple sessions to complete an application); file attachments and supporting documentation; supporting organisation information for multi-part applications; and submission confirmation and receipt communication.

At Acorn Foundation, which manages more than 500 donor-advised funds and distributes $5.1 million annually across a broad range of community grants, streamlined intake and eligibility screening was part of the transition that allowed the team to achieve a 60% reduction in administration time. The intake process needs to work for the applicant and for the programme.

Phase 3: Assessment and panel review

Assessment is where most of the human judgement in the grants process is concentrated, which makes it both the most intellectually complex phase and the most probity-sensitive.

A well-configured assessment phase has three components: a scoring instrument (rubric), a process for managing conflict of interest, and a deliberation structure.

The scoring instrument. Weighted scoring rubrics make assessment criteria explicit, comparable, and defensible. A rubric that assigns specific weights to each criterion — Innovation 40%, Methodology 30%, Team 30%, for example — produces scores that can be compared across assessors and across applications. The quality of a rubric is determined not by whether it has criteria but by whether the criteria are specific enough that two assessors reviewing the same application would arrive at scores within a meaningful range of each other. Criteria that are too broad produce scores that reflect the assessor's general impression rather than the application's actual quality against each dimension.
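The arithmetic behind a weighted rubric is simple, which is part of its defensibility. A sketch using the illustrative weights from the paragraph above (Innovation 40%, Methodology 30%, Team 30%), with per-criterion scores assumed to be out of 10:

```python
# Weighted rubric sketch: criterion weights from the example above.
# The 0-10 scoring scale is an assumption for illustration.

RUBRIC = {"Innovation": 0.40, "Methodology": 0.30, "Team": 0.30}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into one comparable total."""
    assert set(scores) == set(RUBRIC), "every rubric criterion must be scored"
    return round(sum(RUBRIC[c] * scores[c] for c in RUBRIC), 2)

# Two assessors scoring the same application produce directly
# comparable totals:
assessor_a = weighted_score({"Innovation": 8, "Methodology": 7, "Team": 9})
assessor_b = weighted_score({"Innovation": 7, "Methodology": 8, "Team": 8})
```

Because every assessor scores against the same weighted dimensions, a gap between totals can be traced back to a specific criterion rather than argued as a difference of general impression.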

Conflict of interest management. Every assessor in every round should declare their conflicts against every application they are assigned to review. That declaration needs to be captured in a structured way, and the consequence of a declaration — exclusion from scoring, exclusion from discussion, or exclusion from the panel — needs to be enforced systematically, not managed by instruction. System-level COI enforcement means that when an assessor declares a conflict, their access to that application is automatically revoked. The declaration and the restriction are both logged. See our article on managing conflict of interest in grant assessments for a detailed treatment of this topic.

Blind review. For some funding programmes, particularly in the research sector, anonymising applicant identity during the scoring phase reduces identity-based bias. When blind review is configured, assessors see the application content without the applicant's name, organisation, or other identifying details. The programme administrator retains full-details access. Blind review is a targeted tool, not a universal improvement — it addresses identity bias but does not address funding history bias or panel composition problems.

Deliberation structure. Panel discussions should produce documented outcomes: the final ranking, the reasoning for key decisions, and a record of how any disputes between assessors were resolved. That record is both a quality tool (it surfaces weak reasoning at the panel stage rather than at the complaints stage) and a compliance tool.

Phase 4: Decisions and offer letters

The decision phase is often where grants administration tools show their age most clearly. Programmes that have a sophisticated application intake and a well-structured assessment process sometimes produce decision letters that are manually drafted, individually formatted, and sent outside the grants system. That breaks the programme record and creates a compliance gap.

A well-managed decision phase uses the assessment outcome to drive automated decision communications. Offer letters and decline letters should be generated from templates with merge fields that pull applicant name, organisation, funding amount, and round-specific conditions from the application record. They should be sent from within the system so that the communication is logged against the application record and timestamped.
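The merge-field mechanics can be sketched in a few lines. The template text and field names below are hypothetical, but the pattern is the point: the letter is generated from the application record, not drafted by hand:

```python
from string import Template

# Template-driven offer letter sketch. Field names and wording are
# illustrative; in a real system the record values come from the
# application, and the generated letter is logged and timestamped.

OFFER_TEMPLATE = Template(
    "Dear $applicant_name,\n\n"
    "We are pleased to offer $organisation a grant of $$${amount} "
    "under the $round_name round, subject to the attached conditions."
)

record = {
    "applicant_name": "Ari Thompson",
    "organisation": "Harbourview Arts Trust",
    "amount": "25,000",
    "round_name": "Community Arts 2025",
}

# substitute() raises if any merge field is missing, which catches an
# incomplete record before a malformed letter is ever sent.
letter = OFFER_TEMPLATE.substitute(record)
```

Strict substitution is the operationally useful property: a missing field fails loudly at generation time instead of producing a letter with a blank where the funding amount should be.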

Offer letters for approved applications need to include the conditions of the grant, the indicative payment schedule, and instructions for accepting the offer or requesting a variation. The acceptance mechanism itself — whether digital signature or formal reply — should be logged.

For public sector funders with transparency obligations, the decision phase is also when the public register is updated. Taupō District Council, for example, publishes a public register of community grants decisions. That register needs to be accurate, timely, and consistent with the internal decision record. Maintaining those two records separately is an ongoing source of error; publishing from the system record eliminates it.

Phase 5: Contracts and grant agreements

The grant agreement is the legal foundation of the post-award relationship. It sets out what the grantee has agreed to do, by when, for how much, and under what conditions. Every subsequent phase of the grants lifecycle — milestones, payments, reporting, accountability — is governed by what the contract says.

Contract management in grants has a specific complexity: the contract often needs to reference programme-specific terms that vary by round, by applicant type, or by funding stream. A research grant agreement looks different from a community grants agreement, which looks different from a capital works grant. A system that forces all grant agreements into a single template either produces awkward agreements or requires manual editing that breaks the audit trail.

Purpose-built grants systems support contract template libraries with configurable merge fields and conditional clauses. When an offer is accepted, the contract is generated from the relevant template, pre-populated with the applicant's details, the approved funding amount, and the agreed schedule of deliverables. The executed contract is stored against the application record and linked to the milestone and payment schedule.

For NZ On Air, which operates under parliamentary reporting obligations and manages a complex portfolio of broadcast funding agreements, the integrity of the contract record and its connection to the rest of the programme lifecycle is not a nice-to-have. It is a core compliance requirement.

Phase 6: Milestone tracking and progress reporting

Post-award management is where most of the programme risk lives and where under-resourced grants teams struggle most. A funder that has approved 80 grants in a single round now has 80 active relationships to manage, each with its own milestone schedule, reporting requirements, and compliance obligations.

Milestone tracking requires:

  • A clear schedule of what deliverables are due when, linked to the grant agreement
  • A mechanism for the grantee to submit milestone evidence (progress reports, financial acquittals, project outputs)
  • A review workflow for the programme manager or assessor to confirm the milestone has been met
  • A record of any milestone variations requested and approved
  • An exception view that surfaces overdue or at-risk milestones for programme manager attention
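The exception view in the last bullet is worth making concrete, because it is what replaces manual chasing. A sketch with hypothetical milestone records:

```python
from datetime import date

# Exception-view sketch: from a full milestone schedule, surface only
# the items needing programme manager attention. Field names and the
# sample records are illustrative.

milestones = [
    {"grant": "G-014", "title": "Progress report 1", "due": date(2025, 3, 1), "status": "approved"},
    {"grant": "G-022", "title": "Financial acquittal", "due": date(2025, 4, 15), "status": "submitted"},
    {"grant": "G-031", "title": "Progress report 1", "due": date(2025, 2, 1), "status": "pending"},
]

def exceptions(items, today):
    """Overdue = past its due date and not yet confirmed as met."""
    return [m for m in items if m["due"] < today and m["status"] != "approved"]
```

With 80 active grants, the programme manager's daily question changes from "what is the status of everything?" to "what is in the exception view today?"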

Without a system that manages this structure automatically, milestone tracking defaults to a combination of spreadsheets, email reminders, and manual chasing — an approach that scales badly and creates compliance gaps as programme volume grows.

Te Māngai Pāho, which administers more than $60 million in annual funding across broadcast and digital media content, has scaled its programme substantially with the same team. As CEO Larry Parr notes: "Our small team has more than doubled both the number of Funding Rounds and the number of contracts under management in the last two years, thanks to Tahua." That scale of growth is not achievable with a manual milestone tracking process.

Phase 7: Payments and financial reconciliation

The payment phase is where the financial risk in grants management is most concentrated and where integration between the grants system and the organisation's financial systems is most valuable.

A payment should not be triggered until a milestone has been confirmed as met. That sounds obvious, but in practice, many organisations have payment processes that run in parallel to (or ahead of) their milestone confirmation processes, with reconciliation happening after the fact. The result is payments made to grantees who have not yet submitted required evidence, or milestone evidence reviewed after the payment has already been sent.

In a well-designed grants management system, payment approval is triggered by milestone sign-off, not by calendar schedule. The milestone review workflow ends with either approval (which triggers payment initiation) or a request for further information (which holds the payment until the matter is resolved). That sequence is enforced by the system, not managed by individual discipline.
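"Enforced by the system, not managed by individual discipline" means the payment state is simply unreachable except through milestone approval. A minimal state-machine sketch (states and transitions are illustrative, not a description of any particular product's workflow):

```python
from enum import Enum

# Milestone workflow sketch: payment is only reachable from an
# approved milestone, never directly from submission or a calendar
# date. States and allowed transitions are illustrative.

class State(Enum):
    SUBMITTED = "submitted"
    INFO_REQUESTED = "info_requested"
    APPROVED = "approved"
    PAID = "paid"

ALLOWED = {
    State.SUBMITTED: {State.APPROVED, State.INFO_REQUESTED},
    State.INFO_REQUESTED: {State.SUBMITTED},   # grantee resubmits evidence
    State.APPROVED: {State.PAID},              # approval triggers payment
    State.PAID: set(),                         # terminal state
}

def transition(current: State, target: State) -> State:
    """Refuse any step the workflow does not allow."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

An attempt to pay a milestone that was merely submitted raises an error rather than relying on a reviewer remembering the sequence.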

For organisations using Xero as their accounting system, Tahua's Xero integration means that when a milestone is approved, a bill is automatically created in Xero. This eliminates the manual re-entry step between the grants system and the accounting system, removes a category of data-entry error, and ensures the financial record in Xero is always consistent with the programme record in the grants system.

Acorn Foundation processes payments to grantees in one day, down from one week previously — a change driven by the automation of the payment workflow and its integration with their accounting processes.

Phase 8: Final accountability and reporting

The final accountability phase closes the grant. It confirms that the grantee has delivered on all contract obligations, submitted all required reports, and that the funder has reviewed and accepted the final acquittal.

Final accountability reporting often includes both qualitative and financial components. The qualitative component confirms that the funded activity was completed as described in the grant agreement. The financial component confirms that the grant funds were spent as approved. Both components need to be captured against the application record so that the complete file — from application through to final acquittal — is available in one place.

For funders with parliamentary reporting obligations, the final accountability records for each grant are the underlying evidence for the aggregate reports they must produce. Those reports require accurate data on how many grants were made, to whom, for how much, and what outcomes were achieved. If that data is in the grants system, producing the report is a query. If it is in email threads and filing cabinets, it is a significant research project.

The final report is also the point at which lessons from the grant feed back into programme design. Did the milestone schedule prove realistic? Did the grantee encounter systemic problems that might be addressed in future round design? Were there patterns across the round's final reports that suggest changes to eligibility criteria or application form design? Programme learning requires programme data, and programme data requires a grants system that captures structured information throughout the lifecycle.

The audit trail: why it runs through every phase

The audit trail is not a phase — it is the continuous record that runs through every phase from configuration to close. In a properly designed grants system, every substantive action in the system is logged: who created and configured the round, who submitted each application, who was assigned to assess it, when they declared any conflicts, what scores they submitted, who approved the milestone, when the payment was triggered, and when the final report was accepted.

That log is the programme record. It is what an OIA response draws on, what an external or parliamentary audit examines when an agency's grant-making is under scrutiny, and what the convenor refers to when a decision is challenged.

The difference between a government-grade audit trail and a basic activity log is specificity and immutability. A government-grade audit trail logs the specific action taken, the specific actor who took it, the specific record affected, and the exact timestamp. It cannot be edited or deleted after the fact. It is searchable and exportable. For a detailed treatment of what government-grade audit trail requirements actually mean in practice, see our article on audit trails in grants management.
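One common technique for making a log tamper-evident, rather than merely append-only by convention, is hash chaining: each entry embeds the hash of the entry before it, so any after-the-fact edit breaks the chain. This is a sketch of the principle only, not a description of Tahua's implementation:

```python
import hashlib
import json
import time

# Hash-chained audit log sketch: each entry records actor, action,
# record, and timestamp, plus the previous entry's hash. Editing any
# earlier entry invalidates every hash from that point forward.

def append_entry(log: list, actor: str, action: str, record_id: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "actor": actor,
        "action": action,
        "record_id": record_id,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute every hash; any edited or reordered entry fails."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

The searchable and exportable properties come cheaply once entries are structured records; the immutability property is what requires deliberate design.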

Tahua is hosted on AWS ap-southeast-2 (Sydney), which means data stays within the Aotearoa/Australia region. All data is encrypted at rest using AES-256. The platform is aligned with the New Zealand Information Security Manual (NZISM), which sets the security standard for government agencies operating digital systems in New Zealand.

Choosing the right system for your organisation's lifecycle

The central question when evaluating a grants management system is not "does it have a good application form builder?" It is "does it cover the full lifecycle, and does it cover the phases where my organisation is currently weakest?"

Most organisations have reasonable visibility of their intake phase because that is where applicants interact with the system and where problems are immediately visible. The later phases — contracts, milestones, payments, accountability — are often managed through a combination of spreadsheets, email, and separate accounting systems, with the grants system functioning primarily as an application repository rather than a programme management tool.

The cost of that fragmentation is not always immediately apparent. It shows up in milestone evidence that cannot be found when a payment is queried, in payment records that do not match the grants system, in final report data that cannot be aggregated for parliamentary reporting, and in audit findings about documentation gaps in the post-award process.

The 50+ funders across New Zealand and Australia using Tahua collectively administer more than $1 billion NZD through the platform and have processed more than 15,000 grant applications. The platform holds a 5.0/5.0 rating on Capterra. Clients range from large Crown entities with parliamentary accountability obligations to community foundations managing donor-advised fund distributions to district councils publishing public registers of community grants decisions.

Implementation typically takes six to eight weeks, using a train-the-trainer model that builds internal capability rather than creating ongoing system dependency.


For funders operating in a government or Crown entity context, the government grants management page covers the specific requirements of public sector grants administration and how Tahua meets them. To see the full lifecycle in your programme context, book a demo.