What Grantmakers Get Wrong About Applicant Experience (and Why It Costs Them)

Grantmakers spend considerable time improving their assessment process. Few spend equivalent time on their intake design. The problem usually starts before the application is submitted.

When the quality of applications is low, the instinct is to fix the assessment. Better rubrics, clearer scoring, more experienced assessors. These improvements matter — but they are interventions at the wrong stage of the pipeline. By the time an application reaches assessment, the damage is already done. The community organisation that could not work out whether it was eligible never applied. The applicant who lost an hour's work because the portal timed out did not try again. The well-qualified organisation that found the form so daunting it submitted something incomplete was scored accordingly.

The quality of what arrives in your inbox is determined in large part by how you designed the intake. Confusing forms, excessive questions, PDF attachments instead of structured fields, and opaque eligibility criteria are not the applicant's failures. They are design failures. And the cost of those failures falls primarily on the applicants who could least afford it: smaller community organisations, first-time applicants, and groups with limited administrative capacity.


The quality problem starts before an application is submitted

Think about what a first-time applicant sees when they encounter your grant programme. They find information about the grant — if the information is findable, which is not guaranteed. They read the eligibility criteria — if the criteria are written in plain language, which is not guaranteed. They decide whether to apply. They begin an application.

At each of these points, decisions are being made that affect what arrives in your inbox. The organisation that reads your eligibility criteria and is not sure whether it qualifies will often do one of three things: apply anyway and waste your staff's time processing an ineligible application; ask a question that takes your staff time to answer; or not apply, which loses you a potentially strong application from a self-selecting candidate who erred on the side of caution.

None of these outcomes is what you want. All of them are consequences of eligibility criteria that were not written clearly enough or surfaced early enough in the process.

The same logic applies throughout the form. Every question that applicants struggle to interpret, every section that requires context they do not have, every attachment requirement that demands a format they do not use — each of these increases the cognitive load on applicants and decreases the signal-to-noise ratio in what you receive. You end up with long applications that are hard to assess, not because applicants are poor writers, but because the form was not designed to elicit structured, comparable responses.


What high drop-off rates in grant portals are telling you

Most grants management portals have data on drop-off: where in the application flow applicants abandon forms they have started. Most programme managers never look at it.

When applicants consistently abandon applications at a particular point in a form, that is a signal. It might mean the question at that point is confusing. It might mean the section requires preparation the applicant had not anticipated. It might mean the form is simply too long and the applicant ran out of time or motivation. All of these are design problems, not applicant problems.

A high overall drop-off rate from a grant portal — particularly when measured against the number of applicants who began a form — is not evidence that you attracted underqualified applicants. It is evidence that your form has a friction problem. Fixing that problem is cheaper than the staff time you spend processing incomplete submissions or the programme quality you lose when stronger organisations decide not to bother.
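Most portals can export enough per-application data to see this pattern, even if they do not present it as a report. As a rough illustration, here is a minimal sketch of a per-step drop-off summary; the record shape, field names, and step names are assumptions for the example, not any portal's actual schema or export format.

```typescript
// Sketch: per-step completion from a hypothetical portal export.
// The record shape and step names are illustrative assumptions only.

interface ApplicationRecord {
  applicantId: string;
  furthestStepReached: number; // 0-indexed step the applicant last viewed
  submitted: boolean;
}

const stepNames = ["Eligibility", "Organisation details", "Project plan", "Budget", "Declarations"];

function reportDropOff(records: ApplicationRecord[]): void {
  const started = records.length;
  stepNames.forEach((name, i) => {
    // Applicants who reached this step at all
    const reached = records.filter(r => r.furthestStepReached >= i).length;
    const pct = started === 0 ? 0 : Math.round((reached / started) * 100);
    console.log(`${name}: ${reached} of ${started} applicants reached this step (${pct}%)`);
  });
  const submitted = records.filter(r => r.submitted).length;
  console.log(`Submitted: ${submitted} of ${started}`);
}
```

A sharp fall between one step and the next is the place to look first: the question, section, or attachment requirement at that point is where the friction sits.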

The applicants most likely to persist through a difficult form are not necessarily the applicants with the strongest proposals. They are the applicants with the most administrative capacity: larger organisations, organisations with dedicated grants staff, organisations that have applied to your programme before. If your form is optimised for the experienced applicant, you are systematically disadvantaging the first-timer.


The five most common form design mistakes (and what to do instead)

1. Asking for information you do not use in assessment. Every question on a grant form should map to a criterion or a process requirement. If you ask for three years of financial accounts but assessors only look at the most recent year, you are adding burden without adding value. Audit your form against your assessment criteria annually. Remove or make optional anything that is not used.

2. Using free-text fields where structured fields would work better. "Describe your organisation" produces 500-word essays that vary wildly in what they cover. "Number of staff and volunteers," "Year established," "Primary geographic area" — structured fields produce comparable data that is faster to assess and easier to analyse across the pool. Use free text for qualitative questions that genuinely require narrative. A sketch of what structured fields can look like follows this list.

3. Making the form a single-page wall of questions. Multi-page forms with clear section headings reduce cognitive load and help applicants pace their work. A form that presents 30 questions on a single page looks harder than the same questions presented across five sections, even when the total work is identical. Visible progress also matters: applicants who can see they are 60% through are more likely to complete than applicants who have no sense of where they are.

4. Requiring PDF attachments for information that could be captured in the form. PDF content cannot be searched or aggregated across the pool. It requires assessors to open a separate file, find the relevant section, and manually transfer information to a scoring form. Where possible, ask for the information directly in the form as structured fields. Reserve attachments for documents that genuinely cannot be captured as structured fields: audited financials, letters of support, evidence of legal status.

5. Not saving progress automatically. An applicant who works on a form for 45 minutes, loses their session, and finds their work gone will not return. This is not a minor inconvenience — it is a programme quality problem. Automatic draft saving is a baseline expectation for any form applicants will invest significant time completing.
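To make item 2 concrete, here is a minimal sketch of what structured fields might look like in a form definition. The field names, types, and options are illustrative assumptions, not any particular product's schema.

```typescript
// Sketch: structured fields in place of a single "Describe your organisation" essay.
// Field ids, labels, and options are illustrative only.

type Field =
  | { kind: "number"; id: string; label: string; required: boolean }
  | { kind: "select"; id: string; label: string; options: string[]; required: boolean }
  | { kind: "longText"; id: string; label: string; maxWords: number; required: boolean };

const organisationSection: Field[] = [
  { kind: "number", id: "paidStaff", label: "Number of paid staff", required: true },
  { kind: "number", id: "volunteers", label: "Number of volunteers", required: true },
  { kind: "number", id: "yearEstablished", label: "Year established", required: true },
  {
    kind: "select",
    id: "primaryArea",
    label: "Primary geographic area",
    options: ["Single suburb or town", "District", "Region", "Nationwide"],
    required: true,
  },
  // Free text is reserved for the question that genuinely needs narrative.
  {
    kind: "longText",
    id: "projectSummary",
    label: "What will this grant enable you to do?",
    maxWords: 300,
    required: true,
  },
];
```

Structured answers like these can be compared and summarised across the whole pool without an assessor reading an essay to find the staff count.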


Eligibility screening: filtering before intake

Eligibility screening — presenting clear criteria and allowing applicants to self-check before they start — is one of the most effective investments you can make in application quality.

The goal is not to deter applicants. It is to ensure that applicants have enough information to make an informed decision before they invest time in a form. An eligibility check that takes two minutes and tells an applicant they do not qualify saves both parties considerable effort. An eligibility check that takes two minutes and confirms an applicant qualifies gives them confidence that their time is well spent.

Effective eligibility screening is not a paragraph of criteria in the programme guidelines. It is a structured set of conditions — presented before the form — that an applicant can work through systematically. Conditions should be written in plain language, as specific as your programme allows, and ordered so that the most commonly disqualifying conditions come first.
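As an illustration of what "a structured set of conditions" can mean in practice, the sketch below checks eligibility conditions in order and reports the first failure before the applicant opens the form. The conditions, thresholds, and answer fields are made-up examples, not a statement of any programme's actual criteria.

```typescript
// Sketch: eligibility as ordered, plain-language conditions checked before the form.
// Conditions and thresholds are illustrative; order them so the most commonly
// disqualifying checks come first.

interface Answers {
  isLegallyConstituted: boolean;
  operatesInRegion: boolean;
  requestedAmount: number;
}

interface EligibilityCondition {
  question: string;                       // shown to the applicant in plain language
  passes: (answers: Answers) => boolean;  // evaluated against self-reported answers
  failureMessage: string;                 // why they do not qualify, shown immediately
}

const conditions: EligibilityCondition[] = [
  {
    question: "Is your organisation legally constituted (e.g. an incorporated society or charitable trust)?",
    passes: a => a.isLegallyConstituted,
    failureMessage: "This programme only funds legally constituted organisations.",
  },
  {
    question: "Does your organisation deliver services in the programme's region?",
    passes: a => a.operatesInRegion,
    failureMessage: "This programme only funds work delivered within the region.",
  },
  {
    question: "Is your request $50,000 or less?",
    passes: a => a.requestedAmount <= 50_000,
    failureMessage: "Requests above $50,000 are outside this programme's range.",
  },
];

// Returns the first failing condition, or null if the applicant appears eligible.
function checkEligibility(answers: Answers): EligibilityCondition | null {
  return conditions.find(c => !c.passes(answers)) ?? null;
}
```

Because the first failing condition is reported immediately, most ineligible applicants find out within the first one or two questions rather than after completing a lengthy form.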

When eligibility screening is integrated into the intake flow rather than published separately as a PDF, it can also reduce the volume of eligibility queries to your programme team — and eliminate the awkward situation of informing an applicant their submission is ineligible after they have completed a lengthy form.


Why asking for everything upfront is a self-defeating strategy

The instinct to front-load grant applications is understandable. Programme managers have been surprised before by applications that passed initial review and then failed due diligence. The response is to move more of that due diligence earlier, which produces longer, more demanding initial applications.

The cost of this is rarely measured. It includes: first-time applicants who decide the time investment is not worth the uncertainty; community organisations without dedicated grants staff who simply cannot complete a 40-question form; and the assessor time required to process a detailed submission that turns out at first review to be ineligible anyway.

A staged intake model asks less upfront and more later. Stage one: eligibility and key criteria, enough to make a shortlisting decision. Stage two: detailed programme information, provided only by shortlisted applicants. Stage three: due diligence, required only for applicants at the decision stage.

This model reduces burden for the majority of applicants, who do not progress past stage one. It concentrates the detailed information-gathering where it adds the most value. And it signals to applicants that your programme respects their time — which is itself a quality signal about your organisation.
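One way to keep a staged model honest is to record, for each piece of information, the stage at which it is first needed. The stage names and field lists below are illustrative assumptions, not a prescribed structure.

```typescript
// Sketch: a staged intake, with information requested only at the stage that uses it.
// Stage names and field lists are illustrative only.

interface IntakeStage {
  name: string;
  purpose: string;
  requestedFrom: "all applicants" | "shortlisted applicants" | "applicants at decision stage";
  fields: string[];
}

const stagedIntake: IntakeStage[] = [
  {
    name: "Stage 1: Expression of interest",
    purpose: "Enough to confirm eligibility and make a shortlisting decision",
    requestedFrom: "all applicants",
    fields: ["eligibility answers", "project summary", "amount requested", "primary contact"],
  },
  {
    name: "Stage 2: Full application",
    purpose: "Detailed programme information for assessment",
    requestedFrom: "shortlisted applicants",
    fields: ["delivery plan", "budget breakdown", "outcomes and measures", "key personnel"],
  },
  {
    name: "Stage 3: Due diligence",
    purpose: "Verification before a funding decision is confirmed",
    requestedFrom: "applicants at decision stage",
    fields: ["audited financials", "evidence of legal status", "bank verification"],
  },
];
```

Anything that cannot be assigned to a stage that actually uses it is a candidate for removal, which is the same audit recommended in mistake 1 above.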


What "applicant-friendly" looks like in practice for community organisations

For community organisations — particularly small groups, hapū, rural organisations, and first-time applicants — an applicant-friendly intake has a few specific characteristics.

It is accessible from a mobile device. A significant proportion of community-sector applicants will attempt to complete forms on phones or tablets, particularly in the early stages when they are gathering information. A form that requires a desktop browser to navigate effectively excludes a disproportionate share of smaller organisations.

It saves progress automatically. The ability to begin an application, save a draft, return to it over several days, and submit when it is ready is not a luxury. It is a practical requirement for organisations where grant applications are completed by a volunteer or a part-time administrator working around other commitments.

It communicates clearly at every stage. Acknowledgement of receipt, notification of assessment outcomes, clear feedback on unsuccessful applications — these are not just good practice, they are part of the programme's relationship with its applicant community. Organisations that receive clear, specific feedback on an unsuccessful application are more likely to apply again, and more likely to apply better.

It reflects your programme's identity. A branded portal that presents your programme professionally and consistently does not merely improve aesthetics. It signals to applicants that you have invested in making the process good for them — which affects how seriously they take the application and how much effort they invest in making it strong.


The best applications come from applicants who understood what you were asking for, had the confidence to apply, and had the tools to complete the form properly. That is a design outcome. See the applicant portal experience from both sides — book a walkthrough.