There Is No Conventional Creative Wisdom

This article on designing donation solicitations contains no truths.

That is, there are no absolutes. There are no hard rules about best practices, save for one: Test, test, test. And if someone comes to you bearing “accepted industry wisdom,” ask to see the test results that bear that wisdom out… and then create more test results of your own.

There are very good reasons why the fundraising arena has no absolute truths. People change. Environments change. And novel tactics become overused and ineffective. This is why the examples of tests presented by Debbie Schneiderman, director of digital programs at the Wildlife Conservation Society, and Jessica Bosanko, senior vice president at the nonprofit consultancy M+R, contain contradictions. For instance, highlighting matching programs works — except when it depresses response rates. Language matters — except when a striking image does more to stimulate donor action. Short copy is king — except among information-hungry respondents.

There’s another cautionary note that should accompany these suggested tests: Many of them are recent and haven’t gone through a full year of renewals and additional donations. Organizations that treat new-member solicitations as break-even propositions — or even loss leaders — should suspend final judgment about any test’s efficacy until the newly acquired donors have been solicited to renew or to make additional contributions.

In some cases, certain tests might not be appropriate. Take one test idea Schneiderman and Bosanko offered during a session at the recent 2014 New York Nonprofit Conference. Bosanko noted that President Obama’s election campaign ran a test in which it asked for a $5 contribution, breaking from the higher amounts the campaign had originally solicited. The low-dollar ask produced a 172 percent lift in response rate compared to the control.

All well and good, but political donations are somewhat different from ordinary charitable contributions. Political donations are a way of generating engagement, which hopefully translates to votes on Election Day. In that instance, a $5 donation that merely covers the cost of soliciting and processing it is still worth pursuing… because the campaign has a secondary goal.

Bosanko was on firmer ground when she presented tests that tweaked language within solicitations. In one online example, some recipients were urged to “donate now,” while others were asked to “become a member.” In this instance, the “become a member” test pulled a 24 percent higher response rate. This approach shouldn’t be seen as universally effective, however. As Bosanko noted, much can depend on whether donors view a given organization as membership-based.

Some tweaks may diminish in effectiveness over time. Take personalized solicitations, in which recipients are asked, by name and in a header leading off the piece, to take an action. These currently work well, but as more and more organizations embrace the tactic, recipients may become inured to solicitations that use it.

Bosanko gave several examples of how language can be adjusted to reflect specific targets. The Human Rights Campaign, a civil rights organization focused on lesbian, gay, bisexual and transgender equality, tested language that acknowledged recipients had previously identified themselves as straight allies of the gay community. She also suggested that — for organizations with the variable printing capabilities to do so — acknowledging previous actions could be an effective tactic.

Recognizing a recipient’s past actions, or information the recipient has offered, is well worth testing: When a conservation organization tested a solicitation acknowledging previous gifts, the test package tied the control… at least, as far as response rate was concerned. But the average gift amount among those receiving the test package was 30 percent higher.
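One way to read a result like that: response rate alone doesn’t capture a package’s performance; response rate multiplied by average gift gives revenue per piece mailed. The sketch below — a minimal illustration in Python, with made-up numbers rather than figures from the session — shows how a test that merely ties the control on response rate can still win on revenue per piece if its average gift runs higher.

```python
# Illustrative only: hypothetical mail quantities and gift totals,
# not figures from the WCS/M+R session.

def package_metrics(pieces_mailed, gifts_received, total_dollars):
    """Return (response rate, average gift, revenue per piece mailed)."""
    response_rate = gifts_received / pieces_mailed
    average_gift = total_dollars / gifts_received
    return response_rate, average_gift, response_rate * average_gift

# Same response rate in both panels; the test's average gift is ~30% higher.
control = package_metrics(pieces_mailed=10_000, gifts_received=150, total_dollars=4_500)
test    = package_metrics(pieces_mailed=10_000, gifts_received=150, total_dollars=5_850)

for label, (rate, gift, rpp) in (("control", control), ("test", test)):
    print(f"{label}: response rate {rate:.1%}, average gift ${gift:.2f}, "
          f"revenue per piece ${rpp:.3f}")
```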

In some cases, mail package components may spur different response rates based on the receiving audience. A lift note — an additional letter within a mail package — depressed response rates among prospects, but actually increased response levels among individuals who had donated previously, and who apparently wanted further information about the good works their donations had enabled.

There’s another potential benefit to eliminating lift notes, or cutting down the verbiage in solicitations in general: As Bosanko noted, shorter copy tends to have a fast-track approval process, while longer copy invites people at every level of approval to tweak it.

Bosanko ran through a series of tests regarding ask strings — the donation amounts found on response forms. As this is a key component of a fundraising letter, organizations should accept no conventional industry wisdom: These must be tested, tested, tested against the organization’s donor and prospect files.

But what to test? Descending strings — placing the largest suggested donation amounts at the left and following them with progressively smaller levels — as opposed to ascending ones is one valid test. Circling a given amount, preferably slightly above the average donation, is another. And in some cases it’s worth testing the elimination of the string altogether, offering a single slightly-higher-than-average amount ($50, if the average donation is $33, for instance) with a space next to it where donors can fill in whatever amount they’d like.

Of course, all of these tests should first be run against a randomized portion of the mail-out quantity that is representative but not overwhelming (check with a marketing consultant about what constitutes a valid test). Knowledge is power, but it shouldn’t be acquired at the cost of blowing one’s revenue forecasts.
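For readers who want a rough feel for what “a valid test” means before calling that consultant, here is a minimal sketch of the standard two-proportion sample-size formula in Python. The baseline response rate and hoped-for lift below are assumptions chosen for illustration, not numbers from the article.

```python
# Rough sample-size estimate for an A/B mail test on response rate.
# Baseline rate and expected lift are illustrative assumptions.
from scipy.stats import norm

def pieces_per_panel(baseline_rate, expected_lift, alpha=0.05, power=0.80):
    """Approximate pieces needed in EACH panel (control and test) to detect
    a relative lift in response rate, via the two-proportion z-test formula."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(round((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2))

# e.g. a 1.5% control response rate and a hoped-for 20% relative lift
print(pieces_per_panel(baseline_rate=0.015, expected_lift=0.20))
```

At those assumed rates the answer runs to tens of thousands of pieces per panel, which is why small lifts are expensive to prove and why test quantities should remain a modest fraction of the full mail-out.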