Grant Impact Measurement: Qualitative vs Quantitative Approaches

The debate about qualitative versus quantitative impact measurement in grants is usually framed as a methodological question. It's actually a design question: what kind of evidence do you need, for what purpose, and who needs to find it convincing?

Most grant programmes need both. The question is how to combine them without generating an unmanageable evidence burden.

What quantitative measurement does well

Quantitative data — numbers, counts, percentages, rates — is good at describing scale and change over time. It's easy to aggregate across grantees, easy to present in governance reports, and easy to compare across rounds.

For a grants programme, useful quantitative measures include:
- Number of people reached by funded activity
- Proportion of participants who report a specific change (measured by survey)
- Change in a specific rate or measure over time (hospital readmissions, employment uptake, test scores)
- Programme delivery metrics (workshops completed, services accessed, resources distributed)

Quantitative data becomes meaningful when it's tracked consistently over time and compared against a baseline or benchmark. A number in isolation — "we reached 3,000 people" — doesn't tell you whether that's good or not. The same number compared to last year, or to your target, or to similar programmes, tells you something.
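To make that comparison concrete, here is a minimal Python sketch with hypothetical figures (the reach count, last year's count, and the target are all illustrative, not drawn from any real programme):

```python
# Hypothetical figures: the same reach count read against last year's
# result and against this year's target.
reach_this_year = 3000
reach_last_year = 2400
target = 3500

yoy_change = (reach_this_year - reach_last_year) / reach_last_year
vs_target = reach_this_year / target

print(f"Reach: {reach_this_year:,}")                # Reach: 3,000
print(f"Change vs last year: {yoy_change:+.0%}")    # Change vs last year: +25%
print(f"Progress against target: {vs_target:.0%}")  # Progress against target: 86%
```

The raw count is the same in every line; only the comparison gives it meaning.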

What qualitative measurement does well

Qualitative data — interviews, case studies, participant narratives, observations — is good at explaining why something is or isn't working. It captures nuance, context, and the human experience of change that numbers can't convey.

For a grants programme, qualitative evidence is particularly useful for:
- Understanding the mechanism of change (why did this activity produce this outcome?)
- Capturing unexpected outcomes — positive or negative — that weren't anticipated in the programme design
- Illustrating the human significance of statistical findings
- Identifying what's different for particular subgroups of participants

A statistic that says "85% of participants reported improved confidence" is strengthened by a case study that shows what that confidence change looked like in practice — what the person did differently, what it meant for their family or community, what was hard about it.

The common failure mode: collecting one without the other

Programmes that collect only quantitative data end up with figures their board can report but no understanding of why outcomes are or aren't being achieved. When something isn't working, the numbers tell you it isn't working; they don't tell you what to change.

Programmes that collect only qualitative data end up with compelling stories but no way to assess whether the impact is widespread or isolated. A well-chosen case study can look like strong evidence of effectiveness when the broader picture is more mixed.

Both failure modes are common. The solution isn't sophisticated mixed-methods research — it's intentional design.

A practical combined approach

For most grant programmes, a simple combination works:

Quantitative layer: A short survey administered to participants at or shortly after the programme activity. Five to ten questions, Likert-scale or multiple choice, gathering data on self-reported knowledge, confidence, or behaviour change. This is scalable — grantees can administer it themselves — and produces aggregable data.
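As an illustration of what "aggregable" means in practice, here is a minimal Python sketch that rolls a single Likert item up across grantees; the grantee names, scores, and survey item are hypothetical:

```python
from statistics import mean

# Responses to a single 1-5 Likert item ("my confidence has improved"),
# grouped by grantee. All names and scores are made up for illustration.
responses = {
    "Grantee A": [4, 5, 3, 4, 5, 4],
    "Grantee B": [2, 3, 4, 3],
    "Grantee C": [5, 4, 4, 5, 5],
}

# Pool every participant's score into one portfolio-level list.
all_scores = [score for scores in responses.values() for score in scores]
print(f"Portfolio: n={len(all_scores)}, mean={mean(all_scores):.1f}/5")

# Share of participants scoring 4 or 5: the kind of single figure
# ("X% reported improved confidence") that a governance report can carry.
agree = sum(score >= 4 for score in all_scores)
print(f"Reported improved confidence: {agree / len(all_scores):.0%}")
```

The same pooled list extends naturally to per-grantee or per-round breakdowns, which is where asking consistent questions across grantees pays off.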

Qualitative layer: A small number of in-depth case studies, three to five across the grantee portfolio, each telling one or two participant stories. These can be collected by programme staff (a 30-minute phone interview is usually enough) or by grantees with the interest and capacity to do them.

The quantitative layer tells you about reach and scale. The qualitative layer tells you about mechanism and significance. Together they produce a more complete picture than either alone.

When to invest in more rigorous evaluation

More rigorous approaches — randomised controlled trials, quasi-experimental designs, independent longitudinal evaluation — are appropriate when:

- The stakes of the decision are very high (large public investment, significant policy implications)
- There is genuine uncertainty about whether the programme model works
- You need to make a causal claim (not just "participants improved" but "this programme caused the improvement")

For most community grants programmes, this level of rigour isn't warranted. The cost of a rigorous evaluation often exceeds the value of the certainty it produces.

A proportionate approach: invest more in evaluation for novel or high-value programmes, less for programmes with an established track record of delivery.

Presenting impact evidence to different audiences

The same evidence needs to be presented differently for different audiences.

Board and governance: Lead with the numbers, support with one or two stories. Keep it brief. Highlight what changed compared to last period. Focus on whether the programme is achieving what it set out to achieve.

Funders: Demonstrate that your measurement approach is credible and proportionate. Show that you're learning from evidence and adapting. Include both the data and the human context.

Public and sector: Stories first, numbers to support. People connect with human narratives; numbers provide credibility. Lead with what changed for a specific person or community, then provide the scale of impact across the programme.

Your own team: All of it, including what isn't working. The internal use of impact evidence is the most important and the most neglected. Evidence that only gets used for external reporting doesn't improve the programme.


Part of the Tahua grants management series

This article is part of the complete guide: What Great Grant Outcome Reporting Looks Like.