Moving from Evidence to Action: Tackling the Sector’s Thorniest Issues by Rethinking Our Relationships with Data

Evaluation, at its best, helps us to innovate, celebrate our successes, and share them with others. It forces us to be honest about what isn’t working and to make changes. It helps us to convince others to get involved in the work we do. In short, it leads to action. At the same time, many of us have had experiences in which we put a lot of effort into measurement and evaluation, and it did not lead to action.

So, what makes the difference? How do we get from evidence to action?

One possible answer is suggested in Cathy O’Neil’s book Weapons of Math Destruction. O’Neil looks at the role of information and analysis in our lives and tries to identify what differentiates healthy, meaningful data use from misuse. She notes that professional baseball uses data well: teams measure every aspect of the game, and they love to share and discuss what they have measured. Crucially for O’Neil, baseball experts also use their data in a meaningful way. They check each year to see how well their models predicted team success. If their algorithms perform poorly, they change their measurement strategy. It is a living, self-correcting measurement system.

It is also a transparent system in which the data is readily available to a variety of stakeholders. Fans run websites where they conduct their own analyses, sometimes reaching different conclusions than the teams do. It is a system in which people are constantly reminded that their data has limitations and does not tell the entire story.

O’Neil also looks at systems in which algorithms are not used in these healthy ways. She discusses, for example, the risk assessment tools used to predict whether incarcerated people will re-offend. Although there are many criticisms of these assessment tools, they rarely change. In fact, the formula used to calculate risk is often proprietary and secret. No one checks back regularly to see whether the assessment accurately predicted what actually happened, or whether it does so fairly across cultures and over time.

Over time, the algorithm takes on a life of its own, and we come to equate the score churned out by the test with risk. The mechanistic objectivity of the algorithm comes to be seen as a strength. Data collection becomes an end in itself and not a means toward better understanding, common purpose, and more informed action. It is as if a baseball team continued to use the same assessment system to draft new players every year, without ever checking to see whether those players ended up doing well.

At a technical level, defining and measuring success for baseball teams is much more straightforward than evaluating the impact of efforts to address social issues like reintegrating people who have been incarcerated. However, there is no reason why we can’t emulate the way baseball people relate to their data.

Here, we offer four ideas for how to relate to evidence in healthy ways that lead to action.

  1. Make learning and action the focus. One of the simplest ways to make sure that evaluation leads to action is to start and end conversations about evaluation with a focus on learning and action. This may sound obvious, but in too many meetings all the energy is devoted to the process of measurement. People debate the merits and limitations of the survey tool, or reasons to switch to a different one. They list the problems with existing data and convince themselves that there is no point trying to share it or learn from it until every problem on the list is addressed. Then a year goes by without anyone hearing what was done with the data, and people gather again to review the problems with the new data.

Imagine Canada recently launched the first State of Evaluation in Canada report. One of the key findings spoke to the different ways the sector uses evaluation. In particular, the survey found:

  • 88% of charities use their evaluation findings to report to their board;
  • 81% use them to report to funders;
  • 70% use them to revise existing programs or create new ones;
  • 61% use them to inform their organizational priorities;
  • 53% share findings with the people they serve; and
  • 33% share them with partner organizations (Lasby 2019, p. 7).

These results are encouraging insofar as they suggest that charities in Canada use their data in a wide variety of ways. At the same time, the lower rates of sharing findings with the people served and with partners suggest the sector could be making better use of its data.

One of the obstacles to action is the myth that data must be highly rigorous before it can inform action. In fact, you can always learn something from data, even limited data. Even if you do nothing more than articulate why the data failed to answer your key questions, you still deepen your understanding of those questions.

Recently, Taylor Newberry Consulting worked on an evaluation project in which the funder, RBC, asked us to use its standard survey. We added a few extra questions and collected some qualitative data on our own. When the time came to write up the results, we did not have as many completed RBC surveys as we had hoped. It was hard to draw firm conclusions from the data, but we wrote up the findings and shared them with RBC anyway. Drawing on our qualitative data, we tried to name the reasons for the challenges we were facing and to identify ideas for how we could do better next time. We made learning and action the focus, despite the limitations of our data.

Often, conversations with funders begin with a focus on what change will be achieved and how that change should be measured. As important as those questions are, they are difficult to answer properly if we don’t start and end with a shared understanding of why we are gathering the data and what we hope to do with it.

  2. Work together. Evidence from evaluation projects is more likely to lead to action when it is considered alongside other information. We should be looking at academic research and learning from case studies and examples. We should be listening to the people we serve, especially when they have experienced marginalization or oppression. We should create space for them to share experiences and analysis in the form that is most meaningful to them.

Just as we would never draw broad conclusions about a child’s learning based on one test score, we should never be over-reliant on one data source or one stakeholder’s viewpoint when seeking to learn and take action on social issues. When we gather information from various sources in a variety of ways, it also helps us to better understand our evaluation data: we don’t rely on a small amount of data to answer big questions. More importantly, when we value diverse information, we also value different voices and points of view.

There are many ways to work together on translating evidence into action. For example, knowledge sharing is a core function of the Counselling Foundation of Canada’s reporting processes. The Foundation, which focuses on career counselling and development, encourages its grantees to share tools, tips, or publications. These can be viewed publicly on its website. Its grantees are also encouraged to share their work at Cannexus, an annual conference for career development professionals that the foundation supports.

  3. Ask bold questions. Sometimes, evidence does not lead to action because it tells us things we already know or strongly suspect. We may look at the results of evaluation work, shrug, and say, “Yeah, nothing to see here, everything looks fine, let’s move along.” In fact, we sometimes feel like an evaluation has succeeded when this happens. It is as if we are saying, “We went and had a look, and everything was fine – can we get back to work now?”

Yet, the best evaluations should lead us to say, “that’s amazing, we should tell everyone” or “I never thought about that before; we should give that a try.” Or even “that makes absolutely no sense.” When it is well designed, an evaluation should teach us things that we don’t already know.

The team at the UK’s Lankelly Chase Foundation, which aims to “change the systems that perpetuate severe and multiple disadvantage” — focusing on interconnected issues like poverty, homelessness, drug use, and violence — has taken a very deliberate approach to how and why it asks big questions. The foundation focuses on “system behaviours,” described as “the core behaviours that help systems function better for people facing severe and multiple disadvantage.” Its theory of change states: “All our actions are designed to test, understand and promote the system behaviours. Rather than seeking to impose them, we’ve learnt that the most effective way to work with complex systems is to develop open and powerful questions as the basis for collective inquiry” (Lankelly Chase 2018, p. 7).

Lankelly Chase’s approach includes defining its own assumptions about perspective, power, and participation and how each influences systems change. It also includes five reflective questions that guide its work, such as: “How can we promote a more critical approach to understanding the interlocking nature of severe disadvantages?” Finally, it outlines big-picture indicators against which to measure itself, such as “People are free to embody the system behaviours,” which may include indicators like “work has increasing relevance to policy development; the system behaviours are translated into institutional processes that have credibility with decision makers” (Lankelly Chase 2018, p. 8).

Of course, asking bold questions can generate results that make us look bad. It isn’t always easy or comfortable to talk about failure, particularly when we are trying to tackle serious societal issues like inequality, poverty, or homelessness. This is why both funders and non-profits have to create safe spaces for learning, be open about failure, and learn from what didn’t work as much as from what did.

  4. Build your organizational learning muscles. Whether you work for a funder or a non-profit, it takes time to get good at learning. Developing a strong learning culture sets the stage for evidence to lead to action. As the Center for Nonprofit Excellence (2016) notes, “Learning cultures take organizations beyond an emphasis on program-focused outcomes to a more systemic and organization-wide focus on sustainability and effectiveness.”

According to our research, learning organizations have the following qualities:

  • Learning-oriented organizational habits and behaviours (i.e., formal and informal day-to-day practices, processes, and attitudes) that bring a learning culture to life;
  • Strong leadership and strategic direction that support, guide, and prioritize learning for staff and organizational processes; and
  • Resources and capacity to support learning, including tools for analyzing data and reflecting on it, time and space devoted to learning, and good communication processes (Taylor & Liadsky 2018).

As an example, the Pillar Nonprofit Network employs a deliberate, ongoing process for thinking about its organizational culture and learning.[1] At Pillar, formal professional development opportunities are budgeted for and encouraged. At staff meetings, sharing and learning from failure is part of the agenda. Additionally, Pillar welcomes and supports mentorship and teambuilding. Externally, Pillar learns from its member organizations, which cite peer-to-peer educational opportunities as one of the main reasons they join its network. It regularly encourages its members to exchange ideas through workshops, conferences, and online tools (Taylor & Liadsky 2018, pp. 34-35).

There is no single approach to developing a strong organizational learning culture; what works for one organization may not work for another. It is important to think about how we support one another to develop good learning habits and goals. The Organizational Learning Self-Assessment Tool can help surface the different factors that contribute to a learning culture. We hope this tool encourages conversations about how to create the conditions for better, more intentional learning and evaluation.

In conclusion, it’s important to remember that data alone is not transformative. You can have the best data in the world, but if you don’t have people committed to using it, it will sit on a shelf. No one ever speaks wistfully of the bar chart that changed their life.

Tightly controlled measurement processes decrease the risk of surprises. At the same time, they tempt us to forget that all measurement is imperfect. The four tips presented here may not be the most efficient way to produce compelling soundbites for our colleagues in the marketing and fundraising departments, but surprises are a good way to maximize opportunities for action. When we focus on learning, work together, and ask bold questions, we cultivate a healthy relationship with data, one that treats it as a catalyst for inclusive dialogue, disruption of the status quo, and true innovation.

 

Acknowledgement: This article is adapted from a talk given at an Impact Measurement Community of Practice event hosted by RBC. Our work on organizational learning has been supported by the Wellspring Philanthropic Fund.

 

References

The Center for Nonprofit Excellence. (2016). What’s a Learning Culture & Why Does It Matter To Your Nonprofit? Retrieved from https://www.centerfornonprofitexcellence.org/news/whats-learning-culture-why-does-it-matter-your-nonprofit/2016-5-11

Edmondson, A. (2011). Strategies for Learning from Failure. Harvard Business Review. Retrieved from https://hbr.org/2011/04/strategies-for-learning-from-failure

Lankelly Chase. (2018). Our Approach to Change. Retrieved from https://lankellychase.org.uk/wp-content/uploads/2018/04/Our-Approach-To-Change-1.pdf

Lasby, D. (2019). The state of evaluation in Canada: Measurement and evaluation practices in Canada’s charitable sector. Retrieved from http://imaginecanada.ca/resources-and-tools/research-and-facts/state-evaluation-can

O’Neil, C. (2017). Weapons of Math Destruction: How big data increases inequality and threatens democracy. Broadway Books.

Taylor, A. & Liadsky, B. (2018). Achieving Greater Impact by Starting with Learning: How grantmakers can enable learning relationships at the grant application stage. Retrieved from https://theonn.ca/our-work/our-people/evaluation/starting-with-learning/

[1] The Philanthropist recently profiled Pillar Nonprofit Network in its series on the role of networks in the non-profit sector: https://thephilanthropist.ca/2018/08/a-sense-of-place-and-the-potential-for-connection-how-geographic-networks-address-local-challenges-and-build-stronger-communities/
