I am proud to announce a new Win-Vector LLC statistics video course:
Campaign Response Testing
John Mount, Win-Vector LLC
This course works through the very specific statistics problem of trying to estimate the unknown true response rates of one or more populations responding to one or more sales/marketing campaigns or price-points. This is an old, simple, solved problem. It is also the central business problem of the 21st century (as so much current work involves measuring online advertising response rates).
Nina Zumel helped me out by supplying a complete implementation as an R Shiny worksheet!
To me, both the problem and the course are kind of fun.
For each sales/marketing campaign we are trying to measure the response rate. We attempt this by taking measurements from already-run sales campaigns. We ask the user for a mere post-it note's worth of summaries for each campaign:
- The number of “actions” or items sent.
- The value of a success.
- The number of successes resulting.
We then use a Bayesian method to show the user the actual posterior distributions of the unknown true population response rates conditioned on the supplied evidence.
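The worksheet's actual implementation is in R; as a back-of-envelope sketch of the underlying idea (our illustration here in Python, not the worksheet's code), with s successes out of n actions the Jeffreys prior Beta(1/2, 1/2) yields the exact posterior Beta(s + 1/2, n - s + 1/2) for the unknown true response rate:

```python
import math

def jeffreys_posterior_params(actions, successes):
    """Beta posterior parameters for a binomial rate under the Jeffreys prior Beta(1/2, 1/2)."""
    return successes + 0.5, actions - successes + 0.5

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x (0 < x < 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

# e.g. a campaign with 100 actions and 1 success
a, b = jeffreys_posterior_params(100, 1)
print(a, b)          # 1.5 99.5
print(a / (a + b))   # posterior mean response rate, about 0.0149
```

Plotting `beta_pdf` over a grid of rates gives exactly the sort of posterior density curves the worksheet displays.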
For example if the user gives us the following data:
| Label | Actions | Successes | ValueSuccess |
|---|---|---|---|
| Campaign1 | 100 | 2 | $1 |
| Campaign2 | 100 | 1 | $2 |
The worksheet gives the following graph:
The set-up and interpretation of the graph (and some accompanying result tables) are the topic of the video course. Two quick call-outs, though:
- The worksheet computes and reports to the user the posterior odds of each campaign having come from a higher value population.
- Notice the second campaign (the one that had one success valued at $2) has more mass above the black dashed line at x = $0.05/action. This means that even though both campaigns are likely worth only about $0.02 per action, the rarer campaign still has a bit more chance of having come from a high-value population.
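For readers who want to check these call-outs numerically, here is a rough Monte Carlo sketch (our illustration in Python, assuming the Jeffreys-prior posterior the post describes; the worksheet itself does this work for you), using the example numbers (Campaign1: 2 successes out of 100 at $1 each; Campaign2: 1 success out of 100 at $2):

```python
import random

random.seed(1)

def sample_value_per_action(actions, successes, value, n_draws=100_000):
    """Draw from the posterior of (response rate * value per success) under the Jeffreys prior."""
    a, b = successes + 0.5, actions - successes + 0.5
    return [value * random.betavariate(a, b) for _ in range(n_draws)]

c1 = sample_value_per_action(100, 2, 1.0)   # Campaign1: 2 successes worth $1 each
c2 = sample_value_per_action(100, 1, 2.0)   # Campaign2: 1 success worth $2

# posterior probability that Campaign2's true value/action beats Campaign1's
p2_wins = sum(y > x for x, y in zip(c1, c2)) / len(c1)

# posterior mass above the $0.05/action line, per campaign
above1 = sum(x > 0.05 for x in c1) / len(c1)
above2 = sum(y > 0.05 for y in c2) / len(c2)
print(p2_wins, above1, above2)
```

Running this shows the rarer, higher-value campaign (Campaign2) with noticeably more posterior mass above the $0.05/action line, matching the call-out.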
Because the approach is Bayesian we get nice things like credible intervals and fairly direct answers to common business questions (such as: “How much money is at risk, in the sense of the probability of picking the wrong campaign times the expected value lost in picking the wrong campaign?”). With everything wrapped in an interactive worksheet, the user no longer needs to care whether Bayesian methods are harder or easier than frequentist methods to implement (as the implementation is already done and wrapped).
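As one small illustration of those "nice things" (again a Python sketch of the standard calculation, not the worksheet's R code), an equal-tailed credible interval for a campaign's response rate can be read straight off the Beta posterior, here via Monte Carlo quantiles:

```python
import random

random.seed(7)

def credible_interval(actions, successes, level=0.95, n_draws=100_000):
    """Equal-tailed Monte Carlo credible interval for the response rate under the Jeffreys prior."""
    a, b = successes + 0.5, actions - successes + 0.5
    draws = sorted(random.betavariate(a, b) for _ in range(n_draws))
    lo = draws[int((1 - level) / 2 * n_draws)]
    hi = draws[int((1 + level) / 2 * n_draws)]
    return lo, hi

lo, hi = credible_interval(100, 2)   # e.g. Campaign1 from the example
print(round(lo, 4), round(hi, 4))
```

In R one would simply call `qbeta(c(0.025, 0.975), successes + 0.5, actions - successes + 0.5)` for the exact interval.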
The method is standard: we compute the exact posterior distributions of the unknown true population response rates assuming the uninformative Jeffreys prior. We distribute the online worksheet and its source code freely (under the GPL3 license). If you know enough statistics and R programming you can work with these without our help and should be good to go. If you want some explanation and training on how to properly use the worksheet (what questions to form, how to encode them in the sheet inputs, and how to read the results), we ask that you purchase the course as a directed explanation and teaching of the method (or perhaps as a “thanks”).
We could make more comparisons with the more common frequentist tools (hypothesis testing, significance, p-values, and power calculators), but that would be too much like the mistake of trying to introduce the metric system by explaining meters in terms of feet, instead of introducing the meter as a self-sufficient unit of distance (which is what happened in the United States in the 1970s).
Because more and more of us have a direct sales/marketing part of our jobs (for example selling books and subscriptions to Udemy courses!), more and more of us are forced to worry about the above sort of calculation.
To introduce this new course we are, for the time being, offering the following half-off Udemy coupon code: CRT1. We suggest you check out the free promotional video to see if this course is for you (the promotional video is accessed by clicking on the course image).
Related posts:
- Bad Bayes: an example of why you need hold-out testing
- One place not to use the Sharpe ratio
- Skimming statistics papers for the ideas (instead of the complete procedures)