More data is not the answer to knowing what works

Posted on 27 Nov 2024

By Jen Riley, chief impact officer, SmartyGrants


As the chief impact officer at SmartyGrants, I’m always on the hunt for better ways to measure and improve impact, and a recent commentary from two of the world’s leaders in the field got me thinking.

In an article published in Singapore’s Straits Times, Dr Jean Liu and Maryanna Abdo of the Centre for Evidence and Implementation suggest that the global decline in giving to charities has coincided with declining trust in those institutions.

They suggest that the answer to that problem is “better data”.

They are not alone in that view. Their suggestion that better quality data will ensure funds are directed to the most effective solutions and away from activities that are less efficient is one repeated across the globe.

This argument seems good in principle. But demonstrating what truly works and what doesn’t is not straightforward.

SmartyGrants chief impact officer Jen Riley

My view is that “better data” is a red herring. While not-for-profits often reflect deeply on their work and know what is and what isn’t working, they are constrained in what they can share.

In one case, a CEO told me that surprising new information had revealed her organisation should radically change how it accepted vulnerable new clients, and hire much more experienced staff for that process.

But when I asked the CEO why she wouldn’t share this discovery with others in the sector, she replied, “Admitting this looks like we don’t know what we are doing.”

In short, the organisation didn’t want to look bad to its funders.

Leaders fear that sharing what they’ve learnt, including failures, can tarnish a not-for-profit’s reputation and have financial repercussions.

Instead, not-for-profits feel compelled to overstate their success and under-report their struggles, out of fear that being too honest could result in reduced funding.

The pressure to present only positive outcomes leads to unrealistic reporting.

I’ve seen cases where data presented to funders was so consistently positive – including claims such as 100% achievement of outcomes – that while it was impressive, it was disconnected from reality. The pressure to present a flawless picture of success can distort the very data we rely on to build trust.

"Ultimately, the call for “better data” should be reframed as a call for “better culture” in which we foster a culture of learning, adaptation, and honest reflection."
Andrew Leigh
Andrew Leigh

I therefore understand the push by Australia’s Charities Minister, Andrew Leigh, for greater use of randomised controlled trials as a way of putting some scientific rigour around impact data. However, while RCTs can offer objective insights into some interventions, they are not a perfect fit for every social challenge. They require significant resources, can be slow to yield results, and are often difficult to apply to complex social issues where multiple factors are constantly in flux.

For example, in many communities, such as the City of Greater Dandenong, which is home to a high number of refugees and asylum seekers, systemic issues affecting outcomes – such as post-conflict trauma, socioeconomic disadvantage and limited access to work rights – make simple data points insufficient for understanding true impact.

Another example of the complexity of social problems can be seen in First Nations communities, where people are twice as likely as non-Indigenous people to die young from preventable causes. Factors such as “sorry business” associated with the death of family members can prevent participation in programs that might otherwise help close the gap in life expectancy.

These factors are not failures of the not-for-profit’s program, but reflections of life in such communities. Labelling such outcomes as failures oversimplifies the real challenges that community organisations face. It unfairly shifts the responsibility for systemic problems onto not-for-profits rather than acknowledging the social and economic context they operate in.


Jen Riley gave this summary of the value of impact data and the story that goes with it. She was speaking at the Gather for Good event in Melbourne in support of social enterprises. Source: LinkedIn


The push for better data as a way to address declining trust in institutions oversimplifies these challenges. And it won’t build trust or improve outcomes if we don’t also create an environment where funders and not-for-profits can engage in honest, open communication about both successes and setbacks. Trust is built when not-for-profits can candidly share the obstacles they face, without fear of being penalised for reporting less-than-perfect results.

Gathering more data won’t solve any problems if that data is curated to obscure any hint of failure or complexity.

Many years ago, I worked for Oxfam, where a large percentage of funding was from the Australian public. This funding was referred to internally as “unrestricted”, as opposed to funding from government grants and bilateral donors, which was referred to as “restricted” – its use was limited to a particular program with acquittals attached. The dynamic of unrestricted funding (where we were not reporting back to funders) was associated with a robust monitoring, evaluation and learning culture whereby true accounts of outcomes emerged and programs adapted accordingly. This culture is missing inside organisations when 80–90% of funding is “restricted”.

A focus on “polished” data and evidence is a missed opportunity for genuine dialogue between not-for-profits, philanthropists and government agencies. Not-for-profits have deep insights into the communities they serve, and fostering open conversations about both successes and challenges could educate funders on the real nuances within these communities. Using data to manufacture “perfect” results forgoes that chance for genuine learning.

Data can be a tool for educating funders on the complex social, economic and cultural factors at play. Using data in this way can shift the focus from simply measuring outcomes to understanding the deeper context that affects those outcomes, enabling more meaningful, long-term impact.

What not-for-profits really need is a culture where they feel safe to share their full range of experiences – both their successes and the systemic barriers that prevent them from fully achieving the outcomes they seek. We must avoid a situation in which funders want only positive stories, and we must create an environment of learning and improvement. Open, honest dialogue should be the goal.

Many of the funders I work with understand that not-for-profits operate in incredibly challenging environments, where outcomes may fall short for reasons beyond an organisation’s control. They are genuinely interested in hearing about both successes and setbacks. We need to break the current deadlock; otherwise, more data will simply lead to more polished, yet disconnected, success stories.

Ultimately, the call for “better data” should be reframed as a call for “better culture” in which we foster a culture of learning, adaptation, and honest reflection. Only in this way can we move beyond superficial metrics and work toward real, sustainable change.
