What is wrong with lift curves


The first part of our Marketing Analytics Using R course covers campaign analysis with test and control groups and campaign optimisation using lift curves and predicted responses. Among the many topics covered, we discuss what is wrong with lift curves. They are a standard tool in marketing for selecting a target group for a campaign based on predicted response propensity, but the way they are used is wrong, or at least sub-optimal.

Lift curves and the wrong way to use them

[Marketing Analytics Using R training on Halloween]

This time, the first day of the course was Halloween. But we didn't have any special spooky data sets or exercises, just the usual programme: an introduction to marketing and to analytics in marketing, followed by an introduction to setting up and evaluating direct marketing campaigns, with practical exercises using the R platform for statistical computing and data visualisation.

Lift curves are often used in marketing. You first create a model that predicts the probability that a given customer will respond positively to your campaign. A positive response usually means they buy something, but you might have other objectives, such as getting more visits to your web site, having your customers refer friends to become your customers, or other outcomes. The model is created by considering the responses from previous campaigns or by comparing test and control groups from a trial. In the course we show how to create the model using generalised linear models (glm), decision trees (rpart, party/partykit), and ensembles of trees, primarily generalised boosted regression models (gbm) but also forests of trees (randomForest), which we are discouraged from calling Random Forests for trademark reasons. These are of course not all the model types you could consider, but we are limited by the duration of the course, and in practice they work well for a wide class of problems. By consistently using the caret package we give the students a scalable toolkit for exploring other model approaches.
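As a flavour of what the caret interface looks like, here is a minimal sketch of fitting a response model. It assumes a data frame `campaign` with a factor column `response` (levels "no" and "yes") and customer attributes in the remaining columns; the data set and column names are illustrative, not from the course material.

```r
library(caret)

## Cross-validation set-up; classProbs = TRUE so we can ask for probabilities.
ctrl <- trainControl(method = "cv", number = 5, classProbs = TRUE)

## A logistic regression (glm) and a boosted-tree model (gbm)
## through the same train() interface.
fit_glm <- train(response ~ ., data = campaign,
                 method = "glm", trControl = ctrl)
fit_gbm <- train(response ~ ., data = campaign,
                 method = "gbm", trControl = ctrl, verbose = FALSE)

## Predicted probability of a positive response for each customer.
p_response <- predict(fit_gbm, newdata = campaign, type = "prob")[, "yes"]
```

Swapping `method` for "rpart", "rf", or another of caret's supported models leaves the rest of the workflow unchanged, which is the point of teaching it through caret.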

Once you have a model that predicts the probability of a positive response, you score your customer base (or the subset eligible for the campaign) and sort the list by that probability from high to low. The cumulative sum of the first n probabilities gives you the expected sales from contacting the top n customers, and plotting this against n gives the lift curve.
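Continuing the sketch above, this is all the curve is: sort, cumulate, plot. The vector `p_response` is assumed to hold one predicted probability per eligible customer.

```r
## Sort customers by predicted response probability, high to low.
scored <- data.frame(p = p_response)
scored <- scored[order(scored$p, decreasing = TRUE), , drop = FALSE]

## Expected sales from contacting the top n customers.
scored$contacts       <- seq_len(nrow(scored))
scored$expected_sales <- cumsum(scored$p)

## The lift curve: expected sales against number of contacts.
plot(scored$contacts, scored$expected_sales, type = "l",
     xlab = "Number of customers contacted (n)",
     ylab = "Expected sales")
```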

The way marketing people tend to use it is this: they have a budget for N contacts, for example direct mailings, and they read off the curve how many responses they can expect. Or, if they need M responses (typically sales), they find the number of contacts required by starting at M on the y-axis and reading off the corresponding N on the x-axis.
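Reading the curve that way is a one-liner on the `scored` data frame from the sketch above; `N` and `M` here are illustrative figures, not from the article.

```r
N <- 10000   # contact budget
M <- 500     # required number of sales

## Expected sales from contacting the top N customers.
expected_at_N <- scored$expected_sales[min(N, nrow(scored))]

## Smallest number of contacts whose expected sales reach M (NA if unreachable).
contacts_for_M <- which(scored$expected_sales >= M)[1]
```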

Simple.

And wrong.

The right way to do it

[Lost sales by traditional approach]
The lost sales from using the traditional lift curve approach; in this example it amounts to nearly 20% fewer sales in the worst case.

Or at least the approach is not optimal in the setting where the product you are selling (or the behaviour you are encouraging) is readily available to all. The problem is that some of the people who buy are people who would buy anyhow, without your marketing effort, so you are wasting some of your budget. You want to target the customers whose behaviour you are most likely to influence. It should not be a surprise that some of the people who purchased during your test campaign, the test campaign you are using to create the model, would have bought from you anyhow.

You have both a treatment and a control group and it seems a real shame not to use the data from both. The treatment group obviously gives you the model for the propensity to buy.

We teach our students to build a baseline model from the control group to predict who would buy without any stimulus. The difference between the probabilities from the two models is the net response probability, and this is the number you want to sort from high to low when deciding on your campaign list.
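A minimal sketch of this two-model, net-response approach, again using caret as above. It assumes data frames `treatment` and `control` with the same `response` factor and customer attributes, and a data frame `eligible` holding the customer base to be scored; all of these names are illustrative.

```r
library(caret)

ctrl <- trainControl(method = "cv", number = 5, classProbs = TRUE)

## One model from the treatment group, one baseline model from the control group.
fit_treat   <- train(response ~ ., data = treatment, method = "gbm",
                     trControl = ctrl, verbose = FALSE)
fit_control <- train(response ~ ., data = control, method = "gbm",
                     trControl = ctrl, verbose = FALSE)

## Score the eligible customer base with both models.
p_treat   <- predict(fit_treat,   newdata = eligible, type = "prob")[, "yes"]
p_control <- predict(fit_control, newdata = eligible, type = "prob")[, "yes"]

## Net (incremental) response probability: sort this from high to low
## to build the campaign target list.
eligible$net_response <- p_treat - p_control
target_list <- eligible[order(eligible$net_response, decreasing = TRUE), ]
```

The customers at the top of `target_list` are those whose behaviour the campaign is most likely to change, not simply those most likely to buy.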

The two lessons are:

  1. Never throw away good data. In this case, do not ignore the response from the control group.

  2. The outcome of marketing activities is to change human behaviour, so make sure you model the change from your efforts and realise that this is not the same as the overall behaviour. You are not the centre of the universe.

Learn more

If you want to learn more, consider signing up for our training course Marketing Analytics Using R or feel free to contact us for an informal conversation.
