Just one in four “grantseeker outcomes” is an outcome, analysis reveals

Just one in four “grantseeker outcomes” is actually an outcome, an analysis of data entered into the SmartyGrants Outcomes Engine has revealed.

Grantseekers are most commonly mistaking "activities" and "objectives" for "outcomes," the analysis of 664 grantseeker outcomes shows.

SmartyGrants chief impact officer Jen Riley

The conclusion was reached by SmartyGrants chief impact officer Jen Riley, who was granted access to data held by nine funders who have been using the Outcomes Engine to collect data from their grantees. Outcomes Engine users can choose from a series of outcomes-focused “standard sections” to collect data from their grantees. Jen’s analysis focused on the “grantseeker outcome” field, which is designed to collect information during the application phase about prospective grantees’ outcome goals.

“From the nine grantmakers, we reviewed 664 grantseeker outcomes from across 19 funding rounds and 613 applications,” Jen said. “Of the 664 grantseeker outcomes, only 170 (25.6%) were outcome statements.”

SmartyGrants defines an “outcome statement” as one that states a measurable change intended for the beneficiary of a project – for example, “Increase in daily exercise for year 10 students”.

“Outcomes are the changes you expect to occur for the beneficiaries of your project,” Jen said. “Generally, outcomes can be framed as an increase or decrease in one or more of the following:

  • Skills, knowledge, confidence, aspiration, motivation (these are generally immediate or short-term outcomes)
  • Actions, behaviour, change in policy (these are generally intermediate or medium-term outcomes)
  • Social, financial, environmental, physical conditions (these are generally long-term outcomes).”
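Jen's three-tier framing can be represented as a simple lookup, sketched below. The structure and the `level_of` helper are illustrations of the list above, not a SmartyGrants data model.

```python
# A sketch of the three-tier outcome framing described above, mapping each
# time horizon to the kinds of change that typically sit there.
# This is an illustration of the article's list, not a SmartyGrants schema.
OUTCOME_LEVELS = {
    "immediate/short-term": [
        "skills", "knowledge", "confidence", "aspiration", "motivation",
    ],
    "intermediate/medium-term": [
        "actions", "behaviour", "change in policy",
    ],
    "long-term": [
        "social conditions", "financial conditions",
        "environmental conditions", "physical conditions",
    ],
}

def level_of(change):
    """Return the time horizon a given kind of change usually belongs to."""
    for level, changes in OUTCOME_LEVELS.items():
        if change in changes:
            return level
    return None

print(level_of("knowledge"))  # → immediate/short-term
```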

The results of the analysis and reclassification of the 664 grantseeker "outcomes" are shown in the graph below.

[Graph: reclassification of the 664 grantseeker "outcomes"]

“Activities” were most commonly mistaken for outcomes, comprising 49% of the data set.

“I had expected to see more ‘outputs’ in this field, but these accounted for less than 2%,” Jen said.

“Objective statements” accounted for almost 10% of the sample.

"Objectives are what you, as an organisation, aim to do, not the change for the participant," Jen said.

“There are outcomes hidden within these statements – we just need to reword them so that they are outcome statements that can be measured.”

(See box, right, for an explanation of how “objectives” can be converted to measurable outcomes.)

Almost one in 10 of the grantees’ “outcomes” was reclassified as “program administration” – e.g. “submit report”.

“The remainder of the data consisted of a mixture of ‘data collection methods’, ‘justification statements’, ‘mission statements’, ‘principles’, ‘program theory explanations’, ‘descriptions of projects’, ‘stages/milestones’, ‘targets’, and some outcome statements that required a lot more work,” Jen said.

How to spot a “not-outcome”

Jen’s analysis revealed that many of the “not-outcomes” that grantees mistook for outcomes started with or included one of the following words:


“As a rule of thumb, if a statement starts with one of these words, it is probably not an outcome,” Jen said. “It’s likely what you’re describing is an ‘activity’ (the thing you are doing with and for your beneficiaries to achieve outcomes), or ‘program administration’ (the things you do to deliver a project).

“If it has a percentage in the statement, it is probably a target or a measure.”

Jen found other hints in the data that may help grantmakers guide their grantees towards clearer outcomes statements.

“If a statement starts with a word like ‘enhanced’, ‘reduced’, ‘increased’, ‘improved’ or ‘better’, then the grantee is probably heading in the direction of an outcome, because generally they are working with people’s behaviour/actions, skills, knowledge, health/housing/employment conditions, attitudes/values or motivation/awareness – and in outcomes land we are often wanting one of these to go up or down,” she said.
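Those rules of thumb amount to a simple triage. The sketch below illustrates them in code; the activity/administration word list is a hypothetical example for demonstration (the article's own list is not reproduced here), while the change words come from Jen's quote above.

```python
# Illustrative sketch of the rules of thumb for triaging draft outcome
# statements. ACTIVITY_WORDS is a hypothetical example list, not
# SmartyGrants' actual list; CHANGE_WORDS is taken from the article.
import re

# Hypothetical activity / program-administration verbs (assumed for illustration).
ACTIVITY_WORDS = {"deliver", "provide", "run", "hold", "conduct", "submit"}

# Words the article says signal the grantee is heading towards an outcome.
CHANGE_WORDS = {"enhanced", "reduced", "increased", "improved", "better"}

def triage(statement):
    """Roughly sort a draft statement using the rules of thumb above."""
    first_word = statement.strip().lower().split()[0]
    if first_word in ACTIVITY_WORDS:
        return "probably an activity or program administration"
    if re.search(r"\d+\s*%", statement):
        return "probably a target or a measure"
    if first_word in CHANGE_WORDS:
        return "heading in the direction of an outcome"
    return "needs closer review"

print(triage("Deliver six workshops for carers"))
# → probably an activity or program administration
```

A heuristic like this can only flag statements for review; it cannot judge whether a statement actually describes a measurable change for beneficiaries.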

Jen said that even some of the 170 outcome statements found in the data were problematic.

“Nearly 90% of them contained double, triple and sometimes four or five outcomes within the one statement,” she said.

Examples included:

“Volunteers gain confidence, learn work & life skills, form new friendships and enjoy meaningful activities that help the community”

“Women report increased skills, confidence, and sense of belonging after joining training”

“Youth will experience increased engagement with education, reduced absenteeism and increased academic achievement”

The first example shown above should be re-expressed as four separate outcomes:

  • Volunteers gain confidence
  • Volunteers learn work and life skills
  • Volunteers form new friendships
  • Volunteers enjoy meaningful activities that help the community.

"It's understandable that not-for-profits working towards change draw together a number of concepts about the outcomes they're helping to create," Jen said. "However, as an evaluator, those statements make my head hurt! Each outcome needs to be about one concept so we can develop a measure for it."
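The mechanical part of that splitting, re-attaching the shared subject to each single-concept predicate, can be sketched as below. Splitting on commas and "and" is a naive assumption; real statements often need manual rewording, as Jen's reworked examples show.

```python
# A naive sketch of splitting a compound outcome statement into
# single-concept outcomes, re-attaching the shared subject.
# Splitting on commas and "and" is an assumption; real statements
# usually need a human rewrite.
import re

def split_outcomes(subject, compound):
    """Split a compound predicate into one outcome per concept."""
    parts = re.split(r",\s*|\s+and\s+", compound)
    return [f"{subject} {p.strip()}" for p in parts if p.strip()]

for outcome in split_outcomes(
    "Volunteers",
    "gain confidence, learn work & life skills, form new friendships "
    "and enjoy meaningful activities that help the community",
):
    print(outcome)  # prints four single-concept outcomes
```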

Of the 664 grantseeker outcomes analysed, only about 3% were outcome statements that clearly articulated an intended change that could be measured, Jen said.

“This is not hugely surprising. This is a language and approach that has become the domain of consultants and experts.

"This is why the Outcomes Engine has been set up – to democratise access to outcomes measurement methodology and to help grantseekers align their outcomes to grantmakers' outcomes. The grantmaker's outcomes act as a prompt for the type of change the grant program is trying to achieve and provide a space for grantseekers to explain how their work aligns.

“Together the ‘grantmaker outcomes’ and ‘grantseeker outcomes’ fields work to ensure alignment, tell the story of collective impact, and help ensure grantseeker outcomes and community voices are central to the conversation.”

Find out more

Learn more about the Outcomes Engine

Want to learn more about tracking outcomes as a grantmaker? Outcomes Engine users get access to eight hours’ free support from Jen and her team. Find out more by emailing service@smartygrants.com.au with “Outcomes Engine” in the subject line.

Ask Jen more about outcomes and evaluation

SmartyGrants' chief impact officer Jen Riley has more than 20 years' experience in the social sector, having worked with government and large not-for-profits of all kinds in that time, and been part of leading firm Clear Horizon Consulting. She's a specialist in social sector change with skills in strategic planning and in program and product design and management. If you've got a pressing question about evaluation and outcomes measurement, ask here! You'll find the answers on the SmartyGrants forum (available to SmartyGrants users).