Iterative Evaluation For Program Implementation

Tracking the implementation of grant-funded activities, and the degree to which those activities produce results, is essential. Without solid evaluation, nonprofit managers can’t determine whether their work is making a difference, and funders can’t know whether their grant awards are good social investments.

While evaluation plans are as varied as the programs they assess, many gauge only end-of-game results. “A post-game analysis is a wasted opportunity because it doesn’t enhance the effectiveness of program operations,” said Barbara Floersch, grants expert and author of You Have a Hammer: Building Grant Proposals for Social Change. “To fulfill the promise of its role, evaluation should guide our work, not simply provide a post-mortem once the grant funds are gone.”

Because neither communities nor problems are static, steering programs toward peak impact requires ongoing intelligence about how people are feeling, what stakeholders are learning, and what’s changing. The most impactful evaluations are a vital, ongoing component of program implementation. Post-game analysis can bring new information to the field, but it cannot support the rapid-response pivots that increase the effectiveness of an existing program. For ongoing, finger-on-the-pulse information, you’ll need an embedded, iterative evaluation process.

Iterative evaluation is a commitment to ongoing learning and adaptation that’s formalized in the program evaluation plan. It recognizes the dynamic nature of working within a community, and it engages beneficiaries, other stakeholders, and organizations with intersecting interests in defining data needs and establishing feedback loops. Because no one person or group represents everyone affected by or interested in a problem, iterative evaluation includes co-interpretation of incoming data and group decision-making to define lessons learned and needed changes in approach.

Here are the basic steps of an iterative evaluation process.

  1. Develop initial research questions that will yield hard, measurable data as well as qualitative data about perceptions and feelings. 
  2. Gather data in a way that is sensitive to the community, uses existing sources when possible, and yields a rich variety of information. 
  3. Share the data with stakeholders.
  4. Co-interpret the data so that diverse perspectives contribute to understanding what the information means. 
  5. Adapt the program approach or evaluation questions as needed. 
  6. Repeat steps one through five.

How often you complete the assessment-and-adaptation cycle will depend on the program you’re implementing. Some high-octane programs with rich, point-in-time data sources may complete the cycle every month; others may find quarterly assessments work best. Just be sure to identify the frequency in the evaluation plan and stick with it — unless that timetable turns out to be one of the things that needs adjustment.