The quality of what arrives in your assessment queue is determined by how you designed the intake. Confusing forms, excessive questions, and opaque eligibility criteria are not the applicant's failures — they are design failures. And they compound: incomplete submissions create administrative overhead, unusable answers frustrate assessors, and ambiguous eligibility generates contested decisions.
Most grants teams know this at some level. The practical problem is that form design is often done under time pressure, by people who are expert in the programme but not in form design, and with tools that do not make it easy to test the form before it goes live. The result is forms that reflect how the programme team thinks about the programme rather than what applicants need to understand to complete it successfully.
This guide covers the principles for building application forms that produce structured, assessable data, the common mistakes most forms make, and how to fix them.
The standard account of what makes a grant application form bad is that it asks too many questions, uses jargon, or is technically difficult to complete. These are real problems. But they are symptoms of a more fundamental issue: most application forms are designed for the programme team's needs without being designed for the data structure that assessment requires.
Consider what an assessor actually needs from an application. They need to evaluate it against a set of criteria. Those criteria are typically things like: the quality of the proposed activity, the applicant's capacity to deliver it, the expected outcomes, and the budget's reasonableness. For an assessor to evaluate these criteria reliably, the application needs to provide structured, comparable data for each dimension — the same kind of information, in the same form, from every applicant.
Open-text fields that ask "tell us about your project" do not produce structured data. They produce prose of widely varying length, focus, and quality. Some applicants will write 800 words about their project's background and spend three sentences on the plan. Others will skip the background entirely and go straight to a detailed implementation timeline. Both are responding to the same question, but the answers are not comparable — and the assessor has to extract the relevant information from wherever it happens to be in the response.
Forms designed as documents — a series of open-text fields that together tell a story — are intuitive to fill in but difficult to assess consistently. Forms designed as data structures — specific questions for specific data points, with structured response formats — are harder to write for but produce outputs that can be assessed comparably across a large number of applications.
The goal is a form that serves both purposes: enough narrative space for applicants to make their case, with enough structure to give assessors comparable data across the criteria.
Eligibility screening is the most underused tool in application form design, and its absence is costly in several directions at once.
When ineligible applicants submit full applications, they have wasted their time on an application that cannot succeed. The programme team has to process those applications far enough to identify and communicate the ineligibility, which takes time. Those applications take up space in the assessment queue and create potential for confusion. And if eligibility criteria are applied inconsistently — which is more likely when ineligibility is identified late — the programme team has a complaints risk.
Eligibility screening that happens before the application is started prevents all of these problems. A well-designed eligibility gate asks the three to five questions that most reliably identify ineligible applications.
If an applicant fails any of these criteria, they should be stopped immediately with a clear, specific explanation of why they are ineligible and, if appropriate, a pointer to a more suitable programme. That message should be specific — "This programme funds organisations incorporated in New Zealand. Your organisation appears to be incorporated elsewhere" — not generic.
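To make this concrete, an eligibility gate can be thought of as a short list of rules, each pairing a check with the specific message shown when it fails. The TypeScript sketch below is illustrative only: the field names, the $50,000 limit, and the rule shape are assumptions, not Tahua's configuration format.

```typescript
// Illustrative pre-application eligibility gate. Field names, criteria,
// and limits are assumptions for this sketch, not Tahua's API.

interface EligibilityAnswers {
  countryOfIncorporation: string;
  requestedAmount: number;
}

interface EligibilityRule {
  passes: (answers: EligibilityAnswers) => boolean;
  failureMessage: string; // specific, never generic
}

const rules: EligibilityRule[] = [
  {
    passes: (a) => a.countryOfIncorporation === "NZ",
    failureMessage:
      "This programme funds organisations incorporated in New Zealand. " +
      "Your organisation appears to be incorporated elsewhere.",
  },
  {
    passes: (a) => a.requestedAmount <= 50_000,
    failureMessage:
      "This programme funds projects up to $50,000. " +
      "Your requested amount exceeds that limit.",
  },
];

// Stop at the first failed rule so the applicant sees one clear reason.
function screenEligibility(
  answers: EligibilityAnswers
): { eligible: true } | { eligible: false; message: string } {
  for (const rule of rules) {
    if (!rule.passes(answers)) {
      return { eligible: false, message: rule.failureMessage };
    }
  }
  return { eligible: true };
}
```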
Applicants who receive a specific, early eligibility message are far less likely to complain than applicants who complete a full application and are declined on eligibility grounds weeks later. Early screening is also better for the programme's reputation. It signals that the programme team has thought clearly about who the programme is for.
There is no universal answer to how many questions a form should ask, but there is a useful test: for each question on your form, can you describe specifically how the answer will be used in assessment? If you cannot point to a specific assessment criterion that the question directly informs, the question probably should not be there.
The tendency to over-question is understandable. Programme teams want comprehensive information. They add questions "just in case" or because a previous round surfaced a gap in the data. But every additional question has a cost: it increases completion time, reduces the quality of responses to more important questions (applicant fatigue is real), and adds processing burden at the intake stage.
A practical approach to question auditing:

- List every question on the current form.
- For each question, record the assessment criterion it informs and how the answer will be used.
- Remove questions that inform no criterion, and merge questions that collect overlapping information.
- Check the count that remains against the typical ranges below.
Community grants programmes typically need 12–20 questions. Research grants may need 25–35, because the methodology and scientific background require more structured detail. Capital works or infrastructure grants often need 15–25, with more emphasis on timeline and budget breakdown. These ranges are illustrative, not definitive. The right number of questions is the number required to produce the data your assessment process needs.
Open-text fields are appropriate when the content of the answer is inherently narrative and cannot be meaningfully structured — describing the problem the project addresses, explaining the methodology, making the case for the team's suitability.
Structured fields are appropriate when the answer is a specific data point: the funding amount requested, the start and end dates of the project, the number of beneficiaries expected, the organisation's annual revenue, the lead applicant's role or title.
The common design error is using open-text fields for questions that should produce structured data. "What is your budget for this project?" answered as a text field produces answers like "approximately $45,000" and "$40,000 to $50,000 depending on venue costs" and "see attached budget." None of these are equivalent, none are structured, and none can be processed automatically. A currency field with a simple validation produces a comparable, processable data point.
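As a sketch of what that validation can look like, the check below accepts a single numeric amount and rejects ranges, approximations, and "see attached". The parsing rules and the $50,000 cap are assumptions for illustration.

```typescript
// Illustrative currency-field validation: one numeric amount, nothing else.
// The $50,000 cap is an assumed programme limit.

function validateRequestedAmount(
  raw: string
): { valid: true; amount: number } | { valid: false; error: string } {
  const cleaned = raw.replace(/[$,\s]/g, ""); // strip $, commas, whitespace
  if (!/^\d+(\.\d{1,2})?$/.test(cleaned)) {
    // Rejects "approximately $45,000", "$40,000 to $50,000", "see attached".
    return { valid: false, error: "Enter a single dollar amount, e.g. 45000." };
  }
  const amount = Number(cleaned);
  if (amount <= 0 || amount > 50_000) {
    return { valid: false, error: "Requested amount must be between $1 and $50,000." };
  }
  return { valid: true, amount };
}
```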
The inverse error also occurs: using structured fields (dropdown menus, radio buttons) for questions that require explanation. If you ask an applicant to select their organisation type from a dropdown, but their actual organisation type is not one of the options listed, they will either select the closest option (inaccurate) or select "other" and move on (unhelpful). Questions about organisation type, funding history, and partnerships often need both a structured element (select from a list) and a conditional text field (if "other," describe).
Conditional logic shows or hides questions based on the answers to previous questions. It is one of the most effective tools for reducing form length and improving completion rates, because it means applicants only see questions that are relevant to their specific application.
Examples of useful conditional logic:

- Show the "if other, describe" text field only when the applicant selects "other" as their organisation type.
- Show partnership detail questions only when the applicant indicates the project involves partners.
- Show questions about previous grants only to applicants who say they have received funding from the programme before.
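One way to represent branches like these is as declarative visibility rules, each tying a target question to a controlling answer. The shape below is assumed for illustration; it is not Tahua's configuration format, which is built visually rather than in code.

```typescript
// Assumed shape for declarative conditional-logic rules: show a target
// question only when a controlling question has a qualifying answer.

interface VisibilityRule {
  targetQuestionId: string; // question to show or hide
  dependsOn: string;        // controlling question
  showWhen: (answer: string) => boolean;
}

const exampleRules: VisibilityRule[] = [
  {
    targetQuestionId: "organisation_type_other",
    dependsOn: "organisation_type",
    showWhen: (answer) => answer === "other",
  },
  {
    targetQuestionId: "partner_details",
    dependsOn: "has_partners",
    showWhen: (answer) => answer === "yes",
  },
];

// Given the answers so far, work out which questions should be hidden.
function hiddenQuestions(answers: Record<string, string>): Set<string> {
  const hidden = new Set<string>();
  for (const rule of exampleRules) {
    if (!rule.showWhen(answers[rule.dependsOn] ?? "")) {
      hidden.add(rule.targetQuestionId);
    }
  }
  return hidden;
}
```

Each rule in a structure like this corresponds to a test case at the configuration stage: one answer set that should reveal the target question, and one that should hide it.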
Conditional logic requires investment in the form configuration stage — each conditional branch needs to be tested to confirm it shows the right questions and hides the right ones. But it pays back that investment in reduced applicant confusion, higher completion rates, and a cleaner data set at the intake stage.
In a form builder with native conditional logic support, branches are configured visually and can be previewed in real time. The form builder in Tahua's applicant portal supports multi-page forms with conditional logic, allowing programme managers to design complex conditional structures without coding.
Application completion rate is not just a user experience metric — it is a data quality metric. An application that is 60% complete when it expires or is abandoned is not a useful data point. It is a wasted opportunity for both the applicant and the programme.
Auto-save functionality — which saves application progress automatically as the applicant types, without requiring a manual save action — materially improves completion rates. Most applicants do not complete a grant application in a single session. They gather supporting information, consult colleagues, revise their budget, and return to the form over days or sometimes weeks. If the form does not reliably save progress between sessions, applicants who encounter technical problems lose work. Some will restart; others will not.
Auto-save also reduces the support burden. A significant proportion of grants team support queries are about lost progress. "I was halfway through my application and the page timed out and I lost everything" is a common complaint in programmes that do not have auto-save. Eliminating that category of support query is worthwhile in itself.
For funders whose applicants are in areas with unreliable internet connectivity — including rural community grants programmes in New Zealand and Australia — auto-save that works across sessions rather than just within a session is particularly important. An applicant in a remote area who loses their session midway through should be able to pick up exactly where they left off from any device.
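A minimal sketch of cross-session auto-save, assuming a server-side draft endpoint and a local fallback for unreliable connections. The endpoint path, payload shape, and two-second debounce window are all illustrative assumptions.

```typescript
// Debounced, cross-session auto-save sketch. The endpoint and debounce
// window are assumptions; a real portal would also handle auth and conflicts.

type DraftAnswers = Record<string, unknown>;

function createAutoSaver(applicationId: string, debounceMs = 2000) {
  let timer: ReturnType<typeof setTimeout> | undefined;

  return function onAnswersChanged(answers: DraftAnswers): void {
    if (timer !== undefined) clearTimeout(timer);
    // Wait for a pause in typing so every keystroke does not
    // trigger a network request.
    timer = setTimeout(() => {
      fetch(`/api/applications/${applicationId}/draft`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(answers),
      }).catch(() => {
        // On an unreliable connection, keep a local copy so the next
        // page load or successful save can recover the work.
        localStorage.setItem(`draft:${applicationId}`, JSON.stringify(answers));
      });
    }, debounceMs);
  };
}
```

Saving to the server, rather than only to the browser, is what makes a draft recoverable from any device.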
Forms should be tested by people who are not on the programme team before they go live. The programme team has too much context to see the form as a new applicant would. They know what each question means, they know what a good answer looks like, and they are unlikely to be confused by internal terminology that would stop a first-time applicant.
A useful testing protocol:

- Recruit three to five testers who know nothing about the programme, and give each a realistic applicant scenario.
- Ask them to complete the form end to end, saying aloud what they think each question is asking.
- Observe without helping; note every hesitation, misreading, and answer that differs from what the programme team intended.
- Include at least one scenario that should fail the eligibility gate, to confirm the screening logic blocks it.
The things you will find in testing that you will not find by reading the form: ambiguous questions that produce different interpretations, questions that assume knowledge applicants do not have, instruction text that is in the wrong place or is missing entirely, and eligibility logic errors that let ineligible applicants through or incorrectly block eligible ones.
Testing takes a few hours. It is worth doing every time a form is substantially redesigned.
Asking for information that is already in the applicant's profile. If your grants portal collects organisation information at registration, do not ask for it again in the application form. Pull it from the profile. Asking applicants to re-enter information they have already provided signals that the system is not integrated and wastes their time.
Using undefined programme terminology. Terms like "funded activity," "beneficiaries," "outputs and outcomes," and "eligible expenditure" mean specific things to grants professionals and vague or different things to applicants. Every term that has a specific meaning for your programme should either be defined in the form instructions or replaced with plain language.
Setting a file size limit that fails at submission. Budget attachments, supporting documents, and letters of support often run to multi-megabyte PDFs. If your form accepts attachments but has a file size limit that is too low, applicants will discover this when they try to submit, not when they try to upload. The submission failure — after a multi-session completion process — generates support contacts and sometimes abandoned applications. Set file size limits that reflect realistic document sizes and state them clearly at the point where the attachment is requested.
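Checking size at upload time rather than at submission is a small amount of code; the sketch below assumes a 20 MB limit purely for illustration.

```typescript
// Illustrative upload-time size check. The 20 MB limit is an assumption;
// state your real limit next to the upload control.

const MAX_ATTACHMENT_BYTES = 20 * 1024 * 1024;

// Returns an error message to show immediately, or null if the file is fine.
function checkAttachmentSize(file: File): string | null {
  if (file.size > MAX_ATTACHMENT_BYTES) {
    const sizeMb = (file.size / (1024 * 1024)).toFixed(1);
    return `This file is ${sizeMb} MB. Attachments must be 20 MB or smaller.`;
  }
  return null;
}
```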
Not telling applicants what happens after they submit. The submission confirmation message and email should confirm that the submission has been received, and should give applicants their application reference number, the expected timeline for decisions, and who to contact if they have questions. A submission confirmation that says only "Thank you for your submission" leaves applicants uncertain whether the system received their application correctly, and gives them no way to follow up.
Making the form mobile-unfriendly. Multi-column layouts, small clickable targets, and date pickers designed for desktop use create significant friction for applicants on mobile devices. Responsive design for the applicant portal is not optional — particularly for community grants programmes where applicants may not have access to a desktop or laptop computer.
To see how Tahua's form builder handles conditional logic, eligibility screening, and auto-save in practice, book a demo.