Name-Dropping: Donors Get Tired. Models Do, Too

Marketers know their house files — even their lapsed donor files — are among the most valuable sources of names for their campaigns. Karen Gleason, senior director at the American Diabetes Association (ADA), always banked on this knowledge. So when success rates for the organization’s reactivation campaigns began to stagnate, she assumed the cause was a factor other than the list.

Of course, the ADA wasn’t mailing its entire file. The organization relied on a reactivation model, a set of parameters used to select names based on their likelihood to re-up their donations. This raised an interesting question: Could the way the names were picked be failing?

As a test, the ADA ran a cold prospecting model against the lapsed donor file and mailed to a six-figure quantity of names from the new selection. While the actual criteria of the new selects are part of the ADA’s secret sauce, chances are the model weighted the demographics of the file a little more heavily than in the past, and reduced the importance of donor history data.

Whatever the ADA did, it worked. Applying the prospecting model to the lapsed donor file resulted in an 80 percent lift in response rate, a 47 percent lift in net donation per donor, and a 33 percent decrease in the cost of every dollar raised.

True, the ADA saw a 15 percent decrease in average gift size, from $22 to $19, but the leap in response rate more than made up for the drop. As Megan Gibeau, vice president of client services at Lexington, Mass.-based direct marketing agency NNE Marketing, told an audience at the Data & Marketing Association’s Nonprofit Federation Conference, “[ADA] will take a 15 percent decrease [in gift amount] in exchange for an 80 percent lift in reactivation.”
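Back-of-the-envelope arithmetic shows why that trade works in the ADA’s favor. The sketch below, in Python, uses the lifts and gift sizes reported above; the 1 percent baseline response rate is an assumed figure for illustration, not an ADA number.

    # Illustrative only: the 1% baseline response rate is assumed; the lift
    # and gift figures are the ones reported above.
    baseline_response = 0.010                  # assumed baseline response rate
    new_response = baseline_response * 1.80    # 80 percent lift
    old_gift, new_gift = 22.0, 19.0            # average gift before and after

    old_revenue_per_piece = baseline_response * old_gift   # $0.220 per piece mailed
    new_revenue_per_piece = new_response * new_gift        # $0.342 per piece mailed

    print(new_revenue_per_piece / old_revenue_per_piece)   # ~1.55

Roughly 55 percent more gross revenue per piece mailed, even after the smaller average gift.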

It turns out that models, just like creative packages, have lifespans, periods after which they are no longer effective. For some, these lifespans are very long. Others are good for one or two tests, then done. The trick is knowing when a model needs refreshing, and when it has outlived its usefulness.

Another hazard is employing so many models that an organization either faces information overload, or ends up with segments so small that running campaigns to optimize each of them would be cost-inefficient.

It’s not difficult to imagine a scenario in which an organization loses track of the number of models it employs. If activities aimed at identified donors (retention, gift solicitation, and reactivation, for example) and prospecting are handled by different individuals, the number of models used can climb. And if an organization uses a co-op, the co-op might also run several models on its files.

Take a step back, advised John Ernst, vice president, solutions and insights at Niwot, Colo.-based marketing intelligence firm Wiland. Multiple models in and of themselves are not evil. If a given file is put through models built to maximize different results — response rates, average gifts, lifetime value, for instance — each should yield different names. It’s when supposedly different models spew out the same names that data users should get nervous.
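A quick way to spot that problem is to measure how much two selections overlap. The sketch below is a minimal illustration in Python, not Wiland’s method; the model picks and ID format are hypothetical.

    # Minimal sketch (hypothetical data, not any vendor's method): compare the
    # name selections produced by two models using Jaccard overlap.
    def selection_overlap(names_a, names_b):
        a, b = set(names_a), set(names_b)
        return len(a & b) / len(a | b)    # 0.0 = disjoint, 1.0 = identical

    # Hypothetical picks from a response-rate model and an average-gift model
    response_model = ["D1001", "D1002", "D1003", "D1004"]
    avg_gift_model = ["D1002", "D1003", "D1004", "D1005"]
    print(selection_overlap(response_model, avg_gift_model))   # 0.6

If two models built to maximize different outcomes routinely score near 1.0 against each other, one of them is probably redundant.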

Some marketers might rely on multiple modeled sources of names for a simple reason: They’re panicked about not finding enough individuals to solicit. The problem is that, in their eagerness to find enough targets, they’ve agreed to net-name terms that require them to pay prospect co-ops more than the names they get in return are worth.

While Ernst was reluctant to offer hard-and-fast rules about when a given co-op is no longer worth the money, he suggested taking a hard look at a source when its yield falls to around 50 percent.
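The arithmetic behind that threshold is straightforward. The sketch below assumes hypothetical rental terms and reads “yield” as the share of delivered names that survive merge/purge as new, usable names; that reading is an interpretation, not Ernst’s definition.

    # Hypothetical terms: 100,000 names rented at $80 per thousand.
    names_rented = 100_000
    cost_per_thousand = 80.0     # assumed rental rate
    yield_rate = 0.50            # the roughly 50 percent threshold Ernst cited

    total_cost = names_rented / 1_000 * cost_per_thousand   # $8,000
    usable_names = names_rented * yield_rate                # 50,000
    print(total_cost / usable_names)   # $0.16 per usable name, double the full-yield cost of $0.08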

That said, a marketer shouldn’t cut sources with high rates of duplication at random. Ernst recommended a two-step approach: First, rotate sources, co-op and otherwise, especially among the ones with high duplication rates. A marketer might discover that one offers a few more new names than another.

Second, run prior-campaign suppression programs so the same high-scoring yet non-responsive names from a given source aren’t solicited over and over.
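Implemented literally, that suppression step is a simple set operation. The sketch below is a minimal illustration in Python; the IDs and list names are hypothetical.

    # Minimal sketch of prior-campaign suppression: drop candidates that were
    # mailed before from this source but never responded. Hypothetical data.
    def suppress_nonresponders(candidates, prior_mailed, prior_responders):
        mailed, responded = set(prior_mailed), set(prior_responders)
        return [name for name in candidates
                if name not in mailed or name in responded]

    candidates = ["D1001", "D1002", "D1003"]
    prior_mailed = ["D1002", "D1003"]
    prior_responders = ["D1003"]
    print(suppress_nonresponders(candidates, prior_mailed, prior_responders))
    # ['D1001', 'D1003'] -- D1002 was mailed before and never responded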

The most radical of model users might want to go in another direction entirely. Chances are a really terrific prospect scores high in a couple of models (say, mail responsiveness and donation history). While another prospect might not have an extensive history of donations, recent purchase information might show a boost in spending activity.

That prospect might not come up in the top deciles of other models, but it might pop up in the fourth or fifth deciles of several of them. In short, the first is a solid giver, while the second is a rising star based on recent behavior, one possibly worth a solicitation.
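One way to operationalize that idea: flag prospects who rank in the middle deciles or better across several models, even when they top none of them. The rule below is a hypothetical Python sketch, not a method any of the sources described; decile 1 is best.

    # Hypothetical rule, for illustration only: flag anyone in decile 5 or
    # better in at least three models (decile 1 = best decile).
    def rising_stars(decile_scores, max_decile=5, min_models=3):
        return [name for name, deciles in decile_scores.items()
                if sum(d <= max_decile for d in deciles.values()) >= min_models]

    prospects = {
        # Solid giver: tops two models, but recent spending lags
        "D1001": {"mail_response": 1, "donation_history": 2, "recent_spend": 8},
        # Rising star: tops nothing, but sits mid-decile everywhere
        "D1002": {"mail_response": 4, "donation_history": 5, "recent_spend": 3},
    }
    print(rising_stars(prospects))   # ['D1002']

A production version would weight the models and tune the thresholds; the point is only that mid-decile breadth can be a signal in its own right.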