Because funders are looking for concrete evidence of results, hard data — quantitative information that can be tallied or measured — rules in the world of grants. “Sure, we want to be able to prove that our work is producing measurable, positive change,” said Barbara Floersch, executive director of The Grantsmanship Center, in Los Angeles, Calif.
“But unless we collect, analyze, and value soft data as well, we may not know why our approach is or is not working. Soft data are subjective and reflect the experiences and feelings of program participants and beneficiaries,” she said.
Suppose you operate a tutoring program and project this solid, quantifiable outcome: Within six months, 75 percent of the 100 eighth graders (75 students) who participate in tutoring twice a week will raise their grade point averages to 2.5 or higher.
But what if, at the end of the six-month period, only 40 percent of the participants (40 students) have met the mark? What if an assessment of program implementation shows that 20 percent of the participants did not attend tutoring regularly, and that 15 percent dropped out altogether? “This is where soft data come in,” said Floersch. “To improve this program and boost the positive outcome, staff members have got to understand why the students did and did not attend regularly, and why those who dropped out decided to leave.”
When planning a program evaluation, go beyond assessing the degree to which a measurable outcome is being achieved. Put plans into place for assessing why the approach is or is not working. “Soft data can be enlightening and are essential for continuous program improvement,” said Floersch. “Our job is to figure out how to make good things happen. Without soft data, we can’t do that.”

© Copyright 2015 The Grantsmanship Center. All Rights Reserved.