Funders who review hundreds of applications per year see patterns that individual applicants do not. Most applications fail in predictable ways — not because the proposed work is poor, but because the application does not make a clear, evidenced case that the funder needs to see. Understanding these patterns helps applicants write more effectively and helps funders design applications that draw out better responses.
This article is written from the funder's perspective. It describes what strong applications do, based on what assessment panels find persuasive and what makes funded applications different from declined ones.
Before understanding what makes a strong application, it helps to understand what assessors are doing when they read one.
An assessor typically has a stack of applications to review, a scoring rubric, and a limited amount of time per application. They are not reading every word carefully — they are scanning for the information that will allow them to score each criterion. An application that makes the assessor hunt for the answer to "what outcomes will this achieve?" will score lower than one that answers the question clearly and early.
Assessors are also — even with structured rubrics — making a qualitative judgment: does this organisation know what it is doing? Does this application reflect genuine understanding of the problem it is trying to address? Is the approach credible?
Strong applications answer the criterion questions clearly and pass the credibility test. Weak applications require the assessor to work to find answers, or leave them unconvinced.
They state the problem precisely. The strongest applications do not start with the applicant organisation — they start with the problem. They describe who is affected, how many are affected, what the consequences are, and why the problem is not already being adequately addressed. A well-specified problem creates the context in which the proposed solution makes sense.
They propose a specific response. A vague response to a specific problem is a credibility failure. "We will work with young people in our community to improve their wellbeing" does not tell an assessor what the organisation will actually do, how many people it will reach, or why this approach is likely to work. "We will run a ten-week employment readiness programme for 30 young people aged 16–24 in our district, using the evidence-based [programme name] curriculum delivered by three trained facilitators" is specific enough to assess.
They connect the response to evidence. Why will this approach work? What evidence is there that similar approaches have produced the intended outcomes for similar populations? Assessors evaluating many competing proposals will weight evidence-based approaches more heavily than novel approaches with no evidence base. For well-evidenced approaches, citing the evidence (published evaluations, data from the applicant's own previous programmes) is more persuasive than simply claiming the programme is effective.
They demonstrate organisational credibility. The proposal may be excellent, but can this organisation deliver it? Track record matters: previous programmes delivered, accountability requirements met, organisational governance that works. An organisation applying to deliver a $200,000 programme for the first time, with no similar previous delivery, faces a credibility question that the application needs to address — not avoid.
They are honest about risk. Assessors who have read hundreds of applications are suspicious of applications that present no risk. Every programme has risks: the target population may be harder to reach than expected, key staff may leave, partner relationships may not proceed as planned. An application that acknowledges the main risks and describes how they will be managed is more credible than one that presents a seamless path to success.
They match the budget to the work. A budget that does not add up to the proposed activity, or that has unexplained lines, creates doubt. A well-constructed budget shows that the applicant understands what the work costs and has not simply requested the maximum amount available.
Weak applications, by contrast, describe the organisation rather than the problem. Many lead with the applicant's history, values, and achievements. This information matters, but it is not what assessors are scoring. An application that spends its first two pages on the organisation's founding story and its achievements has used precious space on background rather than on the criterion questions.
They state outcomes without evidence. "This programme will improve the wellbeing of young people" is an assertion. "Our previous cohort showed a 45% improvement in school attendance after completing the programme" is evidence. Outcome statements without evidence are common in weak applications and easy to identify as such.
They ask for what they need, not what they propose. Applications that describe a broad programme and then ask for a specific line item ("we need $15,000 for staffing") without connecting the request to the specific activities the funder is being asked to fund create accountability questions. The funder is funding a programme, not a staffing line.
They don't answer the question asked. Application forms ask specific questions. Applicants who write their preferred narrative rather than answering the questions as asked make assessment harder and score lower — not because their proposal is poor, but because the assessor cannot find, under each criterion, the information the question was designed to draw out.
They are longer than they need to be. Longer applications are not better applications. Applications that use the maximum word count for every question, regardless of whether all that space is needed, are harder to assess than well-edited applications that are concise and specific. A 200-word answer that precisely addresses the question is more persuasive than a 500-word answer that circles around it.
The quality of applications is partly a function of the application form. Forms that ask vague questions get vague answers. Forms that specify exactly what information is required — and why it is required — get better answers.
Best practice in application form design:
- Each question maps to a specific assessment criterion
- Questions are phrased to ask for specific, assessable information (not "describe your organisation's impact" but "describe one programme you have delivered in the last two years, the number of people it reached, and the outcomes you observed")
- Word limits are set at a level that allows a good answer but prevents padding
- Guidance notes explain what a strong answer looks like
- Document requirements specify exactly what is needed (not "recent financial information" but "most recent annual financial statements, no more than 18 months old")
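The principles above amount to a simple structural rule: every question should carry an explicit link to the criterion it scores, plus a word limit and guidance. As a hypothetical sketch (the class and field names here are illustrative, not Tahua's actual schema), a form definition and a check for unmapped questions might look like:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str        # the question as shown to applicants
    criterion: str   # the assessment criterion this question scores
    word_limit: int  # cap set to allow a good answer but prevent padding
    guidance: str    # note explaining what a strong answer looks like

# Illustrative form with one question, phrased for specific,
# assessable information (per the best-practice list above).
form = [
    Question(
        text=("Describe one programme you have delivered in the last two "
              "years, the number of people it reached, and the outcomes "
              "you observed."),
        criterion="organisational_credibility",
        word_limit=300,
        guidance=("A strong answer names the programme, gives reach "
                  "figures, and cites observed outcomes."),
    ),
]

def unmapped_questions(questions, criteria):
    """Return questions that do not map to any assessment criterion."""
    return [q for q in questions if q.criterion not in criteria]

# Hypothetical criterion set for this funding round.
criteria = {"problem_definition", "organisational_credibility", "outcomes"}
assert unmapped_questions(form, criteria) == []
```

A check like this makes the first best-practice point enforceable: a question that scores nothing gets flagged before the form is published, rather than producing answers no assessor can use.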
For funders designing application forms and assessment frameworks that draw out high-quality responses, the government grants management and community foundations pages cover Tahua's configurable application and assessment tools.