info@FurtherTheWork.com  tel: 510.243.0122  fax: 510.243.0132
Volume I, Issue 4, August 2009
In This Issue
Meaningful Measures: Learning Through Intentional Inquiry
Evaluation at Work: Spotlight on Oasis For Girls
Key Terms in Outcome-Based Evaluations
Next Steps: Resources for Outcome-Based Evaluations
Our Top Tips for Evaluators
Help Advance the Cause of Social Justice!
About Us
Check Out Our New Newsletter Archive
We've Got E-mail


Dear Rebecca,
 
Maybe I was an unusual kid, but kindergarten was hardly a graham-crackers-and-milk kind of experience for me. I still shiver at the memory of my kindergarten teacher -- Mrs. Lackey, an entirely formidable presence -- smacking her 12-inch ruler against her hip as she strode along the rows of petrified children quaking in our pint-sized chairs. "You colored outside the lines!" she thundered (at least, so it seems in my memory), as she reviewed the results of my early efforts. "You didn't follow the rules!" I was a washout, an academic has-been, all before the age of 6.

Well, from my perspective decades later, I find it hard to believe that Mrs. Lackey was ever as draconian and intimidating as she appeared to my five-year-old self. But what resonates still -- and I'm pretty sure this isn't unique to me -- is the anxious response to the thought of an external (and perhaps unsympathetic) judge determining the "success" of my work.

Yet it's also true that evaluations -- whether formal or informal, personal or professional, internal or external -- are a fact of life. The question, then, becomes how best to benefit from them: how to use evaluations as opportunities for improvement, to help you chart your course rather than judge you after you've arrived.

In this issue of Furthermore..., we explore the concept of evaluation as it applies to the nonprofit sector. With the economy in a state of shock and resources ever tighter, well-conceived and ongoing evaluation practices can actually be your friend (I promise!), not your foe, helping you understand your work, assess its results, modify its course, and demonstrate its value.

Simply put, good evaluation practices and processes can help you answer five basic questions:
  • Who are we helping?
  • How are we helping?
  • How much does our help cost (not just in dollars)?
  • Why are we doing this?
  • How will we determine how well we're doing?
Please join our resident evaluation guru, Joyce Lee-Ibarra, MPH, in her article offering a primer on evaluations: why they exist, how you can use them, and how to turn them into friendly allies, rather than unrelenting judges.

And check out the links to additional resources in the Next Steps section, below.
 
As always, we welcome your comments and questions. E-mail us any time, at info@furtherthework.com.
 
All the best,


Rebecca Brown, MA, CFA, CFRE
President, Further The Work
Bringing for-profit resources to the nonprofit world
Meaningful Measures: Learning Through Intentional Inquiry
by Joyce Lee-Ibarra, MPH, and Rebecca Brown

Few words strike fear in the hearts of nonprofit leaders more than the dreaded term "evaluation." Whether for its seemingly technical nature or its specter of external judgment, evaluation is often viewed as an unpleasant -- if increasingly unavoidable -- task imposed by outsiders (let's just call them "funders").
But it doesn't have to be this way. In the end, evaluation can be your friend, helping you understand your work, increase its efficiency and effectiveness, and maximize your ability to fulfill your organizational mission while pleasing your funders at the same time.

So, Just What Is Evaluation, Anyway?
The definition of "evaluation" is deceptively simple: Evaluation is the process of gathering relevant information to support good decision-making.

But as with so many "simple" things, of course, the devil is in the details. The challenge with evaluation is to clearly identify what you hope to find out, and then to develop manageable mechanisms that let you answer those questions while still leaving room for surprises and reconsiderations.

An Overview of Evaluation

Traditional approaches have often understood evaluation as a one-time event: an after-the-fact judgment assessing success or failure. (See Rebecca's article about the infamous Mrs. Lackey, above.)

Increasingly, though, evaluation is being recognized as an ongoing, recursive process providing dynamic opportunities for an organization's advancement and efficiency. Rather than a summary exercise imposed by an outside "expert" who assigns a final, determinative grade, current approaches emphasize evaluation as an ongoing, internal practice of organizational self-improvement. An organizational culture that values evaluation as a learning process (rather than as a static, retrospective judgment) greatly increases the likelihood that results will be translated into beneficial action.

Key Benefits of Evaluation
All that sounds fine, of course, but what can evaluation really do for you? What about all that jargon evaluators fling about? Well, here's a primer to help you demystify evaluation.

At the most basic level, there are two evaluation types:
 
1. Formative evaluations strengthen or improve an existing program (or maximize the design of a new program) by identifying what you want to accomplish, designing a proposed method to accomplish that goal, and then assessing the quality and efficiency of its implementation. Formative evaluation can include several subordinate evaluation mechanisms:
  • Needs assessment determines who needs the program, the extent of the need, and possible solutions to meet the need.
  • Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures.
  • Implementation evaluation monitors the fidelity of the program operation, examining how well the program followed its intended methods.
2. Summative evaluations examine the effects or outcomes of a program: they describe the change that has occurred as a result of the program. Like formative evaluation, summative evaluation can also be further divided:
  • Outcome evaluations investigate the program's demonstrable effects.
  • Impact evaluation is broader, assessing the overall effects -- intended or unintended -- of the program.
  • Cost-effectiveness and cost-benefit analyses address questions of efficiency by standardizing outcomes in terms of their dollar costs and values (see the sketch just below this list).
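
To make the efficiency idea concrete, here's a minimal sketch of the arithmetic behind both analyses. Every figure (the program cost, the count of successes, the dollar value assigned to each success) is invented for illustration; real cost-benefit work lives or dies by how defensibly you monetize an outcome.

```python
# A minimal sketch of cost-effectiveness and cost-benefit arithmetic.
# All figures are hypothetical, invented for illustration only.

total_program_cost = 120_000.00  # annual program budget, in dollars
successful_outcomes = 80         # participants who achieved the desired change
benefit_per_outcome = 2_500.00   # estimated dollar value of each success

# Cost-effectiveness: dollars spent per successful outcome.
cost_per_outcome = total_program_cost / successful_outcomes

# Cost-benefit: estimated dollar value of all outcomes vs. dollars spent.
benefit_cost_ratio = (successful_outcomes * benefit_per_outcome) / total_program_cost

print(f"Cost per successful outcome: ${cost_per_outcome:,.2f}")  # $1,500.00
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")           # 1.67
```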
Output vs. Outcome: Mistaking Activities for Results
Despite this variety of evaluation approaches and purposes, process evaluations -- perhaps because they tend to be easy to conceive and implement -- are the form most commonly used by small-to-medium nonprofit organizations.

A process evaluation might measure how many people participate in your programs, analyze their demographic mix, and quantify how many hours you spend serving them, how many classes they attend, or how many brochures or classes or counseling sessions you produce. In other words, a process evaluation documents program inputs (the resources you used); activities (the methods of service delivery); and outputs (how many of what thing you offered to whom).
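
If your sign-in sheets live in a spreadsheet, tallying these measures is simple counting. Here's a minimal sketch, with invented records and field names, of the kinds of numbers a process evaluation reports:

```python
from collections import Counter

# Hypothetical attendance records, such as might be exported from a
# sign-in sheet or spreadsheet. All names and values are invented.
records = [
    {"client": "A", "service": "workshop",   "hours": 2.0},
    {"client": "B", "service": "workshop",   "hours": 2.0},
    {"client": "A", "service": "counseling", "hours": 1.0},
    {"client": "C", "service": "counseling", "hours": 1.5},
]

# Outputs: how many of what thing you offered to whom.
clients_served = len({r["client"] for r in records})          # unique clients
sessions_by_service = Counter(r["service"] for r in records)  # sessions per service
total_service_hours = sum(r["hours"] for r in records)        # hours of service delivered

print(f"Unique clients served: {clients_served}")             # 3
print(f"Sessions by service:   {dict(sessions_by_service)}")  # {'workshop': 2, 'counseling': 2}
print(f"Total service hours:   {total_service_hours}")        # 6.5
```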

Undertaking a process evaluation can be a good way for a nonprofit organization to take its first step in developing an evaluation approach and capacity. But while it can be a great place to start, process evaluation should not be mistaken for outcome evaluation.

Here's where the jargon can be destructively deceptive: Often, nonprofit leaders and staff make the mistake of confusing "outputs" with "outcomes."

This distinction is not just a question of jargon. An "output" refers to something you've provided to a client (a workshop, an hour of case management, a night in a homeless shelter, or a meal) -- outputs capture the services you've provided, but they don't measure the difference those services have made. That's where "outcomes" come in. Outcome evaluation measures the differences you make.

As one organization expresses it, what matters "is not how many worms the bird feeds its young, but how well the fledgling flies." [United Way of America, 2002]

Yup, You Guessed Right: Measuring Outcomes is Just a Little Harder...
But It Doesn't Have to Hurt

It's no surprise that process measures are the ones most commonly found in small nonprofit organizations: they're the easiest ones to develop and implement. And in many cases (particularly with government contracts), they're all that anyone asks for.

For example, government contracts might require you to measure your units of service (how many clients you serve, through specific mechanisms, how often, and for how long). These are process measures that reflect how you are going about your work. Although you hope the services they count will produce change in your clients' behavior or knowledge, the measures themselves do not capture that change.

Increasingly, though, nonprofit leaders and funders are asking for more than just process measures: they're looking for evidence of the impact of the work. This is where outcome-focused evaluation comes in. Outcome evaluations attempt to capture and measure the changes in behavior, knowledge, or values that have resulted from your work.

Funders and Nonprofits, Working Together in Tough Times
In this climate of economic uncertainty and limited funding, funders and grantees sometimes find themselves at odds with one another about the importance of evaluation. Funders may require more rigorous evaluation in an attempt to ensure that their funds are being used to maximum effect.  In contrast, grantees may want to use available funding to maintain ongoing programs rather than devote scarce resources to evaluation.

Ultimately, though, the most effective evaluations are those that bridge the gap between funders and grantees. For example:
  • Funders can choose to scale the requirements of an evaluation to fit the size and budget constraints of their grantees -- or, better yet, provide funds specifically to cover the staff time and monetary costs of conducting an evaluation.
  • Grantees, for their part, can seek to carve out a percentage of their budgets to address the evaluation needs of their funders.
Regardless of the specific compromises made, an open dialogue between funders and grantees about their respective needs is the best way to ensure the final evaluation is feasible, informative, and useful for both.
Evaluation at Work: A Spotlight on Oasis For Girls
By Joyce Lee-Ibarra, MPH, Director of Grants Services, Further The Work
 
Founded in 1999 to serve at-risk girls in San Francisco, Oasis For Girls (sfoasis.org) is a multi-disciplinary, arts-based youth-development organization. Oasis' array of programs and activities serves immigrant and low-income girls and young women of color, ages 11-24, throughout San Francisco.

In the summer of 2008, Oasis undertook Measure Up, a nine-month evaluation project to strengthen the organization's work and optimize its benefit to its youthful clients.

Originally intended as a client-satisfaction and needs-assessment effort to be conducted primarily by external evaluators, Measure Up evolved into a youth-driven research and evaluation project. Its goal was to further Oasis' understanding of its own programs and organizational functioning in fulfilling its mission to support leadership development in young women.

Six young women -- all of them Oasis clients or alumnae -- were chosen as the project's managers and research staff. Working in teams, and with the support of adult staff, they prepared and conducted surveys, focus groups, and interviews; analyzed the collected data; and prepared and presented a final report on their findings.

Measure Up provided Oasis with powerful insight into both its internal effectiveness and the needs of the community it serves, and the project's results were readily translated into action plans. For example, Measure Up identified a strong client interest in additional technology and media training. In response, Oasis will work to expand its computer lab and develop its technology training services.

As an organization, Oasis prides itself on constant learning and growing, and "Measure Up reflects our organization's culture," said Jessica van Tuyl, Interim Executive Director. "It helped assure us we were moving in the right direction...[while] it also helped us identify new areas of interest for our girls."
 
Beyond that, Measure Up was itself a vehicle for furthering the mission of Oasis For Girls. By developing the team members' skills, Measure Up inspired and empowered the young women to seek ways to better their own community through evaluation and research.
 
Key Terms in Outcome-Based Evaluations
 
1. Outcomes: Outcomes are the actual impacts, benefits, or changes experienced by participants during or after your program. They may measure changes in behavior or in knowledge, but the key notion is change. For example, in a substance-abuse program, an intended outcome might be a reduction in participants' use of drugs or alcohol. This is very different from an output measure, which might count the number of people who went through various elements of the program but would not capture changes in participants' knowledge or behavior.

2. Outcome Targets: Outcome targets quantify the progress you hope participants will make toward the desired outcomes. Building on the previous example, one of your goals might be to reduce the rate of substance abuse by 50% for 75% of the program's participants. That's an outcome target.

3. Outcome Indicators: Outcome indicators are the ways in which you will measure desired changes in participants' knowledge or behavior. Outcome indicators are observable and measurable "milestones" toward an outcome target. So, to continue our example, you might measure your program's success in reducing excessive use of drugs or alcohol by tracking how many of your participants overdose, or by their self-reported rates of use, or even by the changes in their self-reported desire to reduce their use of drugs or alcohol.

As you can imagine, finding the right indicator can sometimes be the biggest challenge. After all, if you're interested in changing behavior, you can't follow your clients around all day to see what they're doing differently. But you can conduct pre- and post-program surveys, for example, asking them to report on their rates of use of drugs or alcohol, the types of use, and their attitudes towards their use. All of these are legitimate outcome indicators. (For readers who like to see the arithmetic, a small sketch follows.)
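
To show how an indicator rolls up into an outcome target, here's a minimal sketch that tests the hypothetical target above (a 50% reduction in use for 75% of participants) against invented pre- and post-program survey responses. The indicator here -- self-reported days of use per month -- and all the numbers are made up for illustration.

```python
# A minimal sketch of checking an outcome target against an outcome
# indicator (self-reported days of use per month, from hypothetical
# pre- and post-program surveys). All numbers are invented.

# (participant_id, pre_program_days_of_use, post_program_days_of_use)
surveys = [
    ("p1", 20, 8),   # 60% reduction -- meets the 50% threshold
    ("p2", 12, 7),   # ~42% reduction -- does not
    ("p3", 15, 6),   # 60% reduction -- meets it
    ("p4", 10, 4),   # 60% reduction -- meets it
]

REDUCTION_THRESHOLD = 0.50  # each participant: cut use by at least 50%
TARGET_SHARE = 0.75         # outcome target: 75% of participants reach it

# For each participant, did use drop by at least the threshold?
met = [pre > 0 and (pre - post) / pre >= REDUCTION_THRESHOLD
       for _, pre, post in surveys]
share = sum(met) / len(met)

print(f"Participants meeting the 50% reduction: {share:.0%}")  # 75%
print("Outcome target met!" if share >= TARGET_SHARE else "Target not yet met.")
```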
Next Steps: Resources for Outcome-Based Evaluations
 
Whether you're getting ready to conduct your first-ever program evaluation or you have years of experience, there is an abundance of useful, accessible (and often free!) information available on the Internet.  Some of the best resources we've found include:
 
1. FSG Social Impact Advisors (fsg-impact.org) is a nonprofit organization that works with foundations, corporations, governments, and nonprofits to accelerate the pace of social progress by providing advice, disseminating research, and developing initiatives designed to strengthen the field.
  • For foundations: FSG produces a great series of reports on emerging approaches to help foundations develop good evaluations. To review the whole array, click here.
  • For nonprofit Boards and leadership: FSG produces a terrific, accessible resource kit for nonprofit leaders. To download it free, click here.
2. Indiana University-Purdue University Indianapolis and the Institute of Museum and Library Services have teamed up to offer a fabulous, on-line resource called Shaping Outcomes. While Shaping Outcomes focuses on increasing capacity for museum or library projects, its approach could be used to create an evaluation plan for any program.
  • Its step-by-step curriculum materials are available to anyone, at no cost; click here.
  • Shaping Outcomes is also offered as an on-line course, conducted by an instructor, at a cost of $150 per participant or work-group. The next four-week session begins September 9, 2009. To go to the registration and information page, click here.
3. The James Irvine Foundation
  • You can find a wonderful introduction to evaluations here.
  • The Irvine Foundation also offers an array of downloadable publications examining evaluation in greater depth. 
Our "Top Tips" for Evaluators
by Lori Mattern, Marketing and Communications Manager

Engage key stakeholders: Engaging your funders is pretty much a no-brainer, since more often than not they require some kind of evaluation. But there are other key groups you need to involve: encourage feedback and suggestions from program staff, clients, constituents, and board members. And once the evaluation report is complete, make sure that staff have a chance to review and discuss it. After all, they're the ones who will put the lessons into practice.

Make sure you have enough resources to do the job: In developing a program evaluation, focus on what you really need to know, and use staff time and financial resources wisely.

Collect both quantitative and qualitative data: "Quantitative" data, such as ratings and rankings, can be relatively easy to gather, and it provides breadth for your evaluation. But such data doesn't tell the whole story. Be sure to balance it with "qualitative" data gathered from interviews, focus groups, and the like; this can give evaluations depth and insight.

Use the results to further refine future evaluations: Remember that evaluation is an ongoing process. Think of it as a feedback loop, with each set of findings informing the content and focus of future evaluations.

Accept reality: No matter how much time you spend on it or how hard you try, your evaluation will never be perfect. Don't let "perfect" be the enemy of "good."

Allow yourself to consider lessons learned: No matter how well-conceived your program is, you're bound to have at least a few failures. They can tell you lots about your program's effectiveness, and funders and other partners generally welcome your honest self-reflection.

Share your results: Even if the evaluation didn't turn out quite as well as you expected, don't hide the results. Make sure all key stakeholders have access to the evaluation report. It will help management and the board to reflect on the organization's programs, goals, and efficacy. 
 
You Can Help Advance the Cause of Social Justice!
 
Practicing what we preach -- the importance of gathering and analyzing information to improve program design and delivery -- Further The Work has recently launched its first clients' and stakeholders' survey. Our purpose is to better understand the needs, concerns, and interests of all who work to advance social justice in local at-risk communities. The results will help us understand how we can best help you further your work.

Please take five or ten minutes to participate in this survey. Your input is very important to us and is greatly appreciated.

Click here to take the survey.

In addition, we encourage you to forward this newsletter, including the survey link, to your colleagues who work or volunteer in nonprofits and educational institutions, government agencies, private foundations and corporate philanthropy programs. We'd like to hear from a broad array of community members.

Forward to a Friend

We look forward to sharing our survey results with you in a future issue of Furthermore...
 
About Us

Further The Work maximizes the capacity and efficacy of nonprofit, educational, and philanthropic organizations that are working for the greater good. We accomplish this mission by providing strategic planning, project management, and fundraising & marketing services to clients throughout Northern California.
Check Out Our New Newsletter Archive!
 
A must-read for everyone concerned with evaluation is our July 24th E-bulletin covering an important new report entitled "Breakthroughs in Shared Measurement and Social Impact."  

Published by FSG Social Impact Advisors and funded by The William and Flora Hewlett Foundation, this exciting research examines how emerging web-based information-management systems can advance the development of coordinated and integrated multi-sector systems for evaluating the impact of social intervention programs. 

You can now find all archived copies of past issues of Furthermore... and Further The Work E-bulletins on our website. Just click here, and away you'll go!
We've Got E-mail
 
Got comments, compliments, or critiques? Want us to add a resource to our Resources page? Let us know. E-mail your thoughts to info@furtherthework.com.

We'll print excerpts in future issues of Furthermore...
Forward to a Friend