Performance Measurement: Learning to Improve

Think back to middle school. Do you remember the difference between the clear-cut, yes-or-no nature of solving a math problem and the messy, iterative effort of writing an English paper? In the conversations that we hear, nonprofit performance measurement is too often treated like a math problem—the program is either working or not. Period.

We think these conversations are off the mark. Performance measurement is more like writing an essay with multiple drafts. There are no wrong answers, especially for early drafts. The point is not perfection, but rather improving with each draft. In assessing your own performance measurement efforts, the primary question is not “are we doing it right?” but rather “is this useful?”

Toward the end of last year, we had a chance to draw this distinction with 60 leaders from Boston-area nonprofits funded by the Anna B. Stearns Foundation, ranging from affiliates of Big Brothers Big Sisters and the YWCA to neighborhood-based after-school programs. The main quandary in the room was how the organizations could improve their measurement practices with the limited resources at hand. We sought to help them with tactics and mindset.

We began by noting the differences between impact evaluation and performance measurement, each aimed at different goals and using different tactics, tools, and resources. [For more, see “Measurement as Learning” by Jeri Eckhart-Queenan and Matt Forti, and Working Hard & Working Well by David Hunter.]

To boil down the literature: evaluation grows out of social science, an academic discipline with peer-reviewed methodologies. Impact evaluation, specifically, often seeks to attribute causation (e.g., “did this specific program cause that outcome?”) and is often intended for an audience beyond the nonprofit program being evaluated: policymakers, funders, academics, and practitioners more broadly. Impact evaluations tend to make more sense for well-established program models than for experiments and start-ups.

Performance measurement, in contrast, is a management discipline, closely related to continuous improvement and organizational learning. It seeks rapid, incremental improvements in programs and their execution, and thereby better outcomes for participants.

For these reasons, performance measurement uses a broad range of tools, including informal qualitative data, focus groups, short feedback surveys, and staff meetings. It is dynamic, resulting in rapid, ongoing course correction—both for the program model and for the organization itself. Performance measurement is best performed by the staff and leadership of an organization, so that information can quickly influence behavior. Accordingly, it can often be carried out with existing staff and processes (e.g., staff meetings, alumni events).

To help the organizations at our session zero in on the right measures, we asked them to create a logic model for a single program and then prioritize just five measures to test whether the program was producing the desired outcomes. Though challenging, the exercise helped the leaders identify a short, memorable list of the measures that mattered most.

As you look ahead to 2014, here are a few useful tactics to get started with your first draft of performance measurement:

  • With staff, define the logic model for a single program: who you will serve, with what dosage of which services, leading to what outcomes, both short and long term
  • Try a small-scale pilot:
    • Define a programmatic challenge (e.g., participant attrition)
    • Choose 1-2 pieces of relevant data to track systematically (e.g., daily attendance)
    • Discuss the data at your next staff or leadership meeting
    • Identify 2-3 potential solutions (e.g., financial incentives)
    • Pilot the improvements and reconvene
  • Look for relevant studies that can help you understand the link between short-term and long-term outcomes for programs similar to yours

Nonprofit leaders often feel they need proof when it comes to measuring their programs: “Where are my statistically significant results?” In fact, the focus should be on learning, and on using that learning to improve over time. In short, performance measurement implies a burden of action—not a burden of proof—to learn and improve.

Jeff Shumway is a manager with The Bridgespan Group. Colin Murphy is a Bridgespan associate consultant.