The Goldilocks Challenge: Right-Fit Evidence for the Social Sector, by M.K. Gugerty and D. Karlan. Oxford University Press, New York, 2018; 312 pp.; ISBN 9780199366088
Mary Kay Gugerty and Dean Karlan’s The Goldilocks Challenge: Right-Fit Evidence for the Social Sector is a book that all social, public, and impact investing sector stakeholders should read.
It sets out to help social sector leaders answer the question, “What data should I collect to improve my programs?” – and it succeeds. This isn’t just another entry in the growing pile of “best practice” guides. Rather, it is a framework for other frameworks: a clear guide that organizations can use to navigate existing and emerging frameworks, many of which capture only a piece of the puzzle and can lead to gathering insufficient or inappropriate data. Social sector organizations will find an excellent resource that illuminates how to collect data efficiently to improve their programs without wasting already limited funding, and without running expensive randomized controlled trials.
This timely publication contributes to ongoing debates about how, with limited resources, social sector organizations can measure and manage their impact. As the authors acknowledge, these debates arise amidst relatively recent pressures and trends pushing for more impact evaluation data. Recent technology has made collecting and analyzing data easier and cheaper. Global coordination to set and achieve sustainable development goals is increasing. Wealth is increasingly changing hands, transferring to a younger generation that wants to know that each dollar creates measurable impact.
Gugerty and Karlan rightly observe that more data isn’t necessarily better. While some organizations collect too little data, others collect too much, and still others collect the wrong data. All three situations waste time, effort, and funding that could otherwise have helped more people.
The push for more impact evaluation data requires “right-fit” data collection systems – analogous to Goldilocks finding the porridge that is just right. Right-fit systems ensure that limited social sector resources are not wasted, and that appropriate data and analysis inform decision-making so that organizations correctly understand and achieve their intended impact.
The push for more impact evaluation data has also, in some cases, inadvertently led to overlooking the importance of program monitoring data – activities and outputs. Historically, most social sector organizations focused more on collecting program monitoring data than impact evaluation data, so the current push for more impact evaluation data is garnering new and much-needed attention. However, the need for impact evaluation data does not outweigh the need for program monitoring data. Gugerty and Karlan note that program monitoring data multiplied by impact evaluation data equals a comprehensive understanding of an organization’s social impact. The best ideas implemented poorly are of limited value; the same goes for the best implementation of poor ideas. Program monitoring data is just as essential as impact evaluation data in determining an organization’s social impact.
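To make the multiplicative logic explicit – a schematic rendering on this reviewer’s part, not the authors’ own notation – one might write:

$$\text{understanding of social impact} \;\approx\; \underbrace{\text{quality of implementation}}_{\text{program monitoring data}} \;\times\; \underbrace{\text{effectiveness of the idea}}_{\text{impact evaluation data}}$$

If either factor is close to zero, so is the product, which is exactly why a well-implemented bad idea and a poorly implemented good idea both disappoint.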
Today there are increasing numbers of “best practice” impact measurement and management shops and frameworks, each with different branding. Many overlook the importance of program monitoring data in their quest to put impact evaluation data on the map. Gugerty and Karlan justifiably contend that program monitoring data that provides insights into an organization’s accountability and performance management can be far more valuable than data from a poorly run impact evaluation. They suggest that social sector staff should first be confident that their program is being implemented with fidelity to its intended design before they even consider conducting an impact evaluation. After all, if an organization doesn’t have an accurate and comprehensive picture of how its program is being implemented relative to its design, how can it assert that its work is tied to the purported impact?
Fundamentally, a “right-fit” data collection system requires a framework for deciding which program monitoring and impact evaluation data to collect. Gugerty and Karlan provide this framework. They propose four “CART” principles to guide decisions about what data is just right and just enough: Credible, Actionable, Responsible, and Transportable.
- Credible means collecting and accurately analyzing only high-quality data;
- Actionable means collecting only data that an organization can commit to using;
- Responsible means collecting data only when the benefits outweigh the costs of doing so; and
- Transportable means limiting data collection to that which will generate knowledge for other programs.
The authors recommend that all social sector organizations, regardless of size and wealth, employ the CART principles in their everyday decision-making around data collection. The CART principles help explain why randomized controlled trials are neither a panacea nor an organizational ideal.
The CART principles may sound simple – perhaps because the authors summarize them with concision and eloquence. But applying them effectively in practice requires adeptness at defining precisely what “accurate,” “high-quality,” “commitment,” “costs,” and “benefits” mean in sometimes very complex situations. With painstaking detail, Gugerty and Karlan dedicate much of the book to case studies of how organizations have employed the CART principles, the challenges they faced along the way, and how they overcame them.
For instance, the authors illustrate how applying the responsible principle would have saved a program officer with Salama SHIELD Foundation’s microcredit program a great deal of time and money. The program officer wanted to understand why the microcredit program achieved high repayment rates. He conducted focus group discussions and spent much time revising the program’s data collection forms. Realizing that not all questions would inform decision-making at his organization, he decided to collect only data that would likely inform action, in line with the actionable principle. However, he hadn’t considered the responsible principle: the program officers and staff needed to enter and analyze the data were too busy, and, ultimately, the revised data entry forms went unused.
Most of the book deals with the CART principles, how a theory of change can serve as a strong foundation for a right-fit data collection system, how to collect high-quality data, and how to monitor program implementation and evaluate impact with the CART principles. This publication effectively demonstrates how organizations can set up, or course correct, their right-fit data collection systems.
While the CART principles provide a clear and articulate decision-making framework for determining what data to collect to improve social programs, they can be helpful only if the data priorities of social sector organizations and their funders align. When funding contracts mandate that social programs collect specific sets of data that do not belong in “right-fit” data collection systems, social organizations’ hands are tied.
Gugerty and Karlan acknowledge that funders may have different informational priorities than those of the organizations they fund. Two short chapters at the end of the publication explore why there are often misaligned interests between donors and the social programs they fund. Gugerty and Karlan suggest that the CART principles provide a common language and framework that donors and organizations can use to have productive conversations about what data to collect and when.
But overcoming this challenge will require concerted effort. None of the book’s case studies shows what happens when the information funders want to collect is incongruent with the data needs of the organizations they fund. Misaligned interests and priorities help explain why much of the social, public, and impact investing sectors still struggle with “right-fit” data measurement and management today. In Canada, for instance, many funders of social organizations – including government and philanthropists – have historically required social programs to collect and report on funders’ preferred activities and outputs as part of funding contracts.
When funder-specified data is incongruent with organizations’ own data needs, collecting and reporting on it can leave little to no bandwidth for collecting what organizations need to monitor programming and evaluate impact. Today’s overemphasis on collecting impact evaluation data, as seen in the emergence of pay-for-performance contracts, is a relatively recent trend, perhaps an overcompensation for the previous lack of focus on impact evaluation data.
Funders hold the power to determine what data social organizations must collect as a condition of much needed funding. As much as an organization may wish to, and could appropriately, apply the CART principles, there isn’t much it can do if its funders continue to dictate data requirements that do not belong in “right-fit” evaluation systems.
A common language and framework for conversations is simply a starting point. Such conversations must lead to action to realign data interests between funders and social sector organizations. This requires a cultural change to create sectors in which it is a baseline norm for those who seek to help others to regularly confirm that what they believe is “help” is in fact helpful. Just as it is important for social programs to make sure that their help is desired by, and helpful to, target populations without causing inadvertent harm, it is equally important for funders of social programs to ensure that their funding is helpful without unintentionally creating critical inefficiencies.
Strategies and frameworks that help organizations navigate misaligned data priorities with their funders and vice versa may warrant another publication. For now, the CART principles provide a common language and framework for funders and organizations to determine what data to collect most efficiently and effectively once their data interests are aligned.