Evaluation: The Elusive Frontier

The purpose of this article is to review some aspects of evaluation from a practical rather than a technical viewpoint, and to look at recent trends and some resulting challenges to successful evaluation from the perspective of a funder.

There has been a lot of interest in evaluation lately, perhaps with some expectation that evaluation will provide solutions to issues that went unresolved when evaluation was not seriously considered part of the grantmaking agenda. I will caution that at the Trillium Foundation we have found no easy answers. We have found that our commitment to evaluation has caused us to ask questions and has changed the way we think about the whole idea of evaluation.

The Concise Oxford Dictionary defines “evaluate” as “assess, appraise, find or state the number or amount of, find a numerical expression for”. In a strict sense, it means putting a value on something, and of course, values are relative. One helpful definition is that “evaluation is the process of asking good questions, gathering information to answer them and making decisions based on those answers.”1

There is also a good overview of evaluation in the Council on Foundations’ Evaluation for Foundations:2

Good evaluation is systemic, disciplined inquiry. Beyond that, what evaluation is and does varies enormously. An evaluation can entail a site visit by a solo consultant, a phone survey… or rigorous scientific research conducted by a team of academics. It can cost a few hundred dollars or several millions. It can be conducted by foundation staff, grantee staff, outside consultants, or by another independent grantee or contractor. It can serve the interests of the foundation that sponsors it, of grantees, policymakers, practitioners, and social scientists.

Evaluation has been confused with accountability. The difference is simple but the implications are huge. Accountability relates to the funders’ need to ensure that grant recipients did what they said they would do and used the funds for the purpose specified. Every grant recipient must be accountable.

Historically, evaluation has been about what the grantee achieves with each project. This has been a limitation funders have tended to put on evaluation. If you see the results of the grants as ends in themselves, rather than means to an end, then you need to be concerned only with the value of each grant without reference to a broader framework.

In the past, funders have made granting decisions based on need. The greater the need, the easier it was to direct money to it. We’ve all looked for the needs analysis, the number of clients served, volunteers involved, workshops held, etc. as indicators of how successful a program was. We now understand that we should also have been asking what difference the money made, so we’ve shifted towards “outcome” evaluation and are currently struggling to find ways of measuring outcomes.

Increasingly, funders talk about more “holistic” and “community-based” approaches to working with people. With such approaches, it becomes difficult to identify which intervention had the desired impact and which other conditions are necessary for it to be repeated. Prevention seems to be the way to go. One perspective on prevention, “Primary Prevention: A Cynic’s Definition”,3 nicely captures the evaluation challenges:

Primary prevention deals with problems that don’t exist, with people who don’t want to be bothered, with methods that probably haven’t been demonstrated to be efficacious, in problems that are multidisciplinary, multifaceted, and multigenerational, involving complex longitudinal research designs for which clear-cut results are expected immediately for political and economic reasons unrelated to the task in question.

So, in this environment, has the Trillium Foundation abandoned all hope for evaluation? Far from it. We think we are thriving as an organization because we are so committed to evaluation. What changed our view? First, we thought long and hard about what had not worked in the past. Evaluation was about the grant recipients and did not relate to our organization. We required accountability and called it evaluation, largely for the purpose of deciding whether to continue payments under a grant and as a basis for future grant decisions based on past performance. Not a bad idea, but that approach did not encourage grant recipients to tell us much we could not have predicted ourselves.

Next we researched other foundations’ approaches to evaluation. We talked to them, collected their resources and bibliographies, and learned from their experiences. One of the most influential resources we came across was the Independent Sector’s report, A Vision of Evaluation.4 It presents a vision of evaluation as a continuous learning process for organizations in all dimensions of their organizational life. Here are some components of this vision:

• The output of evaluation is organizational learning: it is how an organization assesses its progress and achieves its mission.

• Evaluation is everyone’s responsibility. Everyone in the organization should be involved in asking the key question: what can we do to get better?

• Evaluation addresses both internal effectiveness and external results.

• Evaluation is not a post-grant report event, but a continuous development process integrated into the planning and day-to-day activities of the organization.

• Evaluation invites collaboration within the organization and with external parties including clients, donors, and grantees.

• Simple, cost-effective, user-friendly evaluation methods are available for use.

The biggest shift we made in our thinking was developing a clear understanding of what is being evaluated and why. Evaluation is directed to both our own organization and the grants we make. As an organization, we have a vision and some goals. We want to be accountable in this regard, and we need evaluation to do this. We have an organizational planning system that uses evaluation and helps us make changes on a regular basis. We use feedback, solicited or not, and check every decision for compatibility with our vision and values. Sometimes it is a demanding process (analyzing 1,200 survey responses); sometimes it is just board and staff observations. And we listen. In our strategic planning process, we spent some 1,000 hours in consultation with diverse groups and individuals. When we launched our new program in 1994, we surveyed all 1,200 applicants, got an astounding 72 per cent response rate, and used this feedback to make changes to the following year’s Guidelines. Our understanding of our program and the world around us changes on a daily basis as we try to keep our program relevant to the needs of the people who use it.

As for the grants, while we need accountability from all grant recipients, the value of what we fund varies, and so we need different levels of evaluation from them. We have developed evaluation levels and assign one to each grant. A grant might have a lower value because it may not directly further our goals; if its expected value nonetheless has merit, we will require accountability and ask statistical questions but limit the evaluation component, unless we are very clear about how we will use that information. We are approving very few grants where this is the case. A project that has model-building capacity, or that will challenge current thinking about an issue and result in major changes, will have evaluation built in, and we may fund it if much depends on the outcome. In most cases, we think the grant recipients have the capacity to evaluate their own programs.

So, the lesson we learned is that evaluation is alive and well, but it is a balance between art and science. It is not a destination but a journey, one that requires a fine balance between knowing where you are going and creating the road map at the same time. It is more about rigorous thinking than rigorous methodology; it is about interpreting data rather than simply answering questions about a program’s worth; and it is a tool for improving organizational efficiency for funders and grant recipients alike.

Here are some challenges we see as a result of what we have learned about evaluation:

a) If evaluation for funders of human services is more of an art than a science, how do we balance the need for quantitative and qualitative information? There is now more tolerance for developmental approaches and for using evaluation as a learning tool, but we know that statistics form an important component of a credible evaluation strategy.

b) In the age of shrinking institutions and resources and increasing community involvement in the resolution of issues, how do we build evaluation capacity in a useful way? What roles do evaluation professionals play in building greater community capacity?

c) We know that systems change is a very effective strategy for overcoming barriers. It is a relatively new approach, is risky, and probably requires many years of funding in an environment where factors such as political support, community participation, and highly motivated stakeholders need to be stable. With the move to outcomes funding, the process for these changes is perhaps not being tracked as well as it should be, making it difficult to develop replication strategies.5

d) I recently heard someone suggest that, for model-building purposes, 98 per cent of activities funded don’t meet the experimental design conditions needed to replicate a project successfully. The problem is not with the design conditions; it’s with the assumption that the rigorous laboratory conditions required for scientific experiments can be replicated when we are working with people.

Finally, perhaps we should keep in mind a whimsical story by Michael Quinn Patton, who has consulted with many U.S. foundations:6

In the beginning God created the heaven and the earth. And God saw everything that He made. “Behold”, God said, “it is very good…”

And on the seventh day, God rested from all His work. His archangel came then unto Him asking, “God, how do you know that what you have created is very good? Toward what mission was your creation directed? What were your goals? What were your criteria for what is very good? On what data do you base your judgment? Aren’t you a little close to the situation to make a fair and unbiased evaluation?”

God thought about those questions all that day and His rest was greatly disturbed. On the eighth day, God said: “Lucifer, go to hell.”

FOOTNOTES

1. A Vision of Evaluation: A report on the lessons learned from Independent Sector’s work on evaluation, pp. v, vi.

2. Jossey-Bass Inc., San Francisco, ISBN 1-55542-541-0, p. 2.

3. M. Bloom, “Primary Prevention: The Possible Science”, quoted in The Challenge of Evaluating Prevention Programs: An example from Child Sexual Abuse, Evaluation Methods Sourcebook II, Ed. Arnold Love, Canadian Evaluation Society, p. 104.

4. Supra, footnote 1.

5. Harvard Family Research Project, The Evaluation Exchange: Emerging Strategies in Evaluating Child and Family Services, Winter 1995, Volume I, Number 1.

6. Quoted in The Chronicle of Philanthropy, November 15, 1994, p. 38.

SHERHERAZADE HIRJI

Director of Learning and Evaluation, Trillium Foundation, Toronto
