
How do we know that marketing works?

This article was first published on CYBAEA on R and kindly contributed to R-bloggers.

How do we know that marketing works? It was National Poetry Day in Britain recently, and I do believe that poetry and imagination, and the wisdom and insights they can bring, have a place in the business and corporate world. But wisdom isn’t knowledge; it does not immediately convince in the way that facts presented as a coherent story do. So when I am asked “how do we know marketing works?” the question usually means “how do we collect facts and present them in a persuasive narrative that will convince the board?”

My background is the hard science of experimental particle physics at CERN, where nothing counts as a discovery until the experiment has been replicated with at least five-sigma significance. Max Planck is one of the founders of this science, so when he says that experiment is the only means of knowledge, I am with him. It is in my DNA.

More information

The beauty of Marketing is that in many, many cases we can indeed experiment and gain knowledge that way. We go through all the basics, both concepts and practice, with hands-on exercises on real data in our Marketing Analytics Using R training course.

Consider a direct marketing campaign, that is, a message or a series of messages directed to identified individuals with the objective of changing their behaviour. It could be delivered classically as addressed postal letters, emails, telephone calls, and so on, or as targeted web site promotions or social media campaigns aimed at identified users. The key is that you know who gets to see the message (at least initially; we’ll leave the effect of sharing and viral campaigns to the course). This is sometimes known as ‘below the line’ (BTL) advertising.

This is different from what we might call advertising, that is, broad messages to groups of individuals. Think of classical TV, radio, or newspaper ads, broad banner ads, Google ads, paid-for blog posts, and so on. While the target group may be identified, e.g. by the TV they watch, the magazines they read, or the web sites they visit, the individual is not known, only the aggregate group characteristics. Old-school marketing managers may call this ‘above the line’ (ATL) advertising.

For direct marketing the key concept is that of test and control groups. If you come from a digital marketing background then you may know this as ‘A/B testing’. The basic idea is that you generate your target list of individuals to whom you want to send your message and then take a random sample of that list away as your control group, to which you do not communicate. If the two groups are really comparable in terms of the types of individuals in them, and they should be if you selected a true random, unbiased sample, and the only difference in your treatment of the two groups is sending or not sending the marketing message, then any difference in future behaviour, such as different spending patterns, must be down to your campaign. This is science, this is experiment, and Planck would be proud. There are some subtleties to make sure you select the groups right (such as no peeking at the results during the experiment) but not many. It is simple, it is effective, and it works.
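As a minimal sketch in R, suppose you hold your target list in a data frame called targets (a name assumed here for illustration) with one row per individual; a random, unbiased control group is then just a random split:

```r
# A minimal sketch, assuming a data frame 'targets' with one row per
# individual on the target list (all names here are illustrative).
set.seed(42)                      # make the split reproducible

control_fraction <- 0.10          # hold back 10% as the control group
is_control <- runif(nrow(targets)) < control_fraction

targets$group <- ifelse(is_control, "control", "test")

# Only the test group is contacted; the control group is left alone
# and only measured after the campaign.
contact_list <- targets[targets$group == "test", ]
```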

If you are not using robust test and control groups for your direct marketing campaigns, then start now. Today. Get measuring. Then stop all the campaigns that produce a negative result.

Yes, negative result; campaigns that make customers less likely to buy, more likely to defect to the competition, or more in need of (expensive and unproductive) support from you. We call it Stop Doing Stupid Stuff. At client after client the quick win has more often than not been to Stop Doing Stupid Stuff. Don’t beat yourself up if you have some campaigns with negative returns; I can assure you that you are not alone. It is very easy to become so hung up on your great ideas about what should work that you forget to question whether it really does work. Imagination can be useful, but to enhance our understanding of what is real, not instead of it. Just figure out today what is real and what is imagined, and then stop the campaigns with negative outcomes. Get the facts.
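A hedged sketch of the post-campaign read-out, continuing the illustrative targets frame from above and assuming it now carries a logical responded column (e.g. purchased within the measurement window):

```r
# Compare response rates between test and control; the column names
# are assumptions carried over from the sketch above.
test    <- subset(targets, group == "test")
control <- subset(targets, group == "control")

# Two-sample test for equality of proportions
prop.test(x = c(sum(test$responded), sum(control$responded)),
          n = c(nrow(test), nrow(control)))

# Incremental (net) response: a negative value means the campaign hurt
uplift <- mean(test$responded) - mean(control$responded)
if (uplift < 0) message("Negative result: candidate for Stop Doing Stupid Stuff.")
```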

In our course syllabus we first establish the principles of test and control groups, then we introduce lift curves and consider the difference between gross and net responses before moving on to modelling campaign profitability and optimising your marketing spend: when you have multiple campaigns to run, which one(s) do you execute on which targeted individuals to maximise your profits (or other business objectives)?
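To make the lift-curve idea concrete, here is a hand-rolled sketch (not the course material itself), assuming a scored test set in a data frame scored with a model probability column score and a 0/1 responded outcome:

```r
# Rank customers by model score and plot the cumulative share of
# responders captured against the share of the list contacted.
scored <- scored[order(-scored$score), ]          # best prospects first
pct_targeted <- seq_len(nrow(scored)) / nrow(scored)
pct_captured <- cumsum(scored$responded) / sum(scored$responded)

plot(pct_targeted, pct_captured, type = "l",
     xlab = "Proportion of list contacted",
     ylab = "Proportion of responders captured",
     main = "Lift (gains) curve")
abline(0, 1, lty = 2)   # the diagonal is what random targeting achieves
```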


“Understanding the reasons for churn, devising the right (effective) churn prevention offers, and measuring (and continue to measure) their effectiveness is usually by far the hardest part of churn management.”

By comparison, the actual churn model is usually quite easy to develop.

Using the same principles we cover churn models to address the business problem of customers leaving you. Some people get confused: what do test and control groups have to do with churn? After all, you don’t choose who will leave you. But then you are doing churn modelling wrong. You typically do not want to predict churn, as in the probability that an individual will leave you. Say you did build this model, and say it was a great model. Now what are you going to do? The model does not directly address the possible business actions or the decisions that the business needs to make. Typically, what you want to do is to offer some sort of incentive to stay to some of the customers who are likely to leave. So the decision is: for each individual customer, given the range of possible retention incentives that I have at my disposal, would offering one of these incentives at this time to this customer increase my overall profitability? You might frame this question as ‘what is the expected change in customer lifetime value from presenting each of these offers at this time?’, which is something that you can usefully model. You need the cost of the offers, the propensity to accept them, the baseline customer lifetime value, and the change in spend and tenure or loyalty given the offer.
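As a toy illustration of that decision framing (every column and input below is a hypothetical placeholder, not output from any particular model):

```r
# Expected change in customer lifetime value (CLV) from presenting an offer:
#   p_accept      - modelled propensity to accept the offer
#   clv_baseline  - expected CLV with no intervention
#   clv_if_accept - expected CLV if the offer is accepted
#   offer_cost    - cost of the incentive plus the cost of contact
expected_uplift <- function(p_accept, clv_baseline, clv_if_accept, offer_cost) {
  p_accept * (clv_if_accept - clv_baseline) - offer_cost
}

# Make offer A only to customers where the expected uplift is positive
# ('customers' and its columns are hypothetical for this sketch).
customers$uplift_A <- with(customers,
  expected_uplift(p_accept_A, clv_baseline, clv_if_accept_A, cost_A))
offer_A_targets <- customers[customers$uplift_A > 0, ]
```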

Sure, it is more complicated, but it is useful in a way that the simple ‘Will this customer churn?’ model is not. We wrote more on commercial churn modelling and we cover it in practical detail in the Marketing Analytics Using R training course.

Our philosophy is simple. The focus is always, or should be, on the business actions. Models do not make money; changing the way you do business may. Start with the end in mind, measure the right outcomes, and directly model the required business decisions, and you should be fine. But it is easy to get absorbed in the data and the modelling, and we have plenty of examples of that. Business understanding is key to successful Marketing Analytics.

We do not usually go into details on the statistics. Typically in the problems with which we deal in Marketing Analytics, we have thousands if not millions of records. Thousands of customers, tens or hundreds of thousands of purchases or transactions, or millions of web site visits. A signal from thousands of observations will usually be statistically significant, at least in the naive sense, and with an emphasis on experimentation that is normally good enough. So we spend more time on feature selection and model optimisation.
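As a quick illustration of why raw significance is rarely the bottleneck at this scale (the numbers below are synthetic, not client data), a difference of 0.2 percentage points in response rate is comfortably significant with 100,000 individuals per group:

```r
# Synthetic example: 2.2% vs 2.0% response with 100,000 individuals per group
prop.test(x = c(2200, 2000), n = c(100000, 100000))
# The p-value is around 0.002: a tiny effect, but clearly 'significant'.
```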

Feature selection is important on the wide data sets that are typical of our problems, and creating the right (derived) features is very often the key to developing a successful model. The small response rates and wide data sets mean that we often use modern modelling techniques that are extremely powerful but may make interpretation more difficult. We compare simple and complex models in the course and show how to optimise feature selection and model performance using the caret package and framework. (The course instructor is an acknowledged contributor to the caret package.)
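As a minimal caret sketch (not the course material; the data frame model_data and its two-level factor outcome responded, with levels such as "yes"/"no", are assumptions for illustration), a penalised model like glmnet gives embedded feature selection while caret handles the resampling and tuning:

```r
library(caret)    # also requires the glmnet package to be installed

set.seed(2016)
ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE, summaryFunction = twoClassSummary)

# The lasso/elastic-net penalty shrinks uninformative features to zero;
# caret tunes alpha and lambda by cross-validated ROC AUC.
fit <- train(responded ~ ., data = model_data,
             method = "glmnet",
             metric = "ROC",
             trControl = ctrl)

fit          # resampling performance across the tuning grid
varImp(fit)  # which features carry the signal
```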

Marketing Analytics Using R training course

New dates announced: 31 October – 04 November 2016.

Learn more about our Marketing Analytics Using R training course. You can also Contact us to register your interest and receive more information on this course, and we will let you know the next time we run a public class. If you have several colleagues who are interested then we can also run a course just for you.
