In a previous post, I outlined how to load daily Adobe Analytics Clickstream data feeds into a PostgreSQL database. While this isn’t a long-term scalable solution for large e-commerce companies doing millions of page views per day, for exploratory analysis a relational database structure can work well until a more robust solution is put into place (such as Hadoop/Spark).
Data Validation <groan>
Before digging too deeply into the data, we should validate that the data loaded from the data feed into our database (custom database view code) matches what we observe from other sources (mainly, the Adobe Analytics interface and/or RSiteCatalyst). Since the Adobe Analytics data feed is an export of the underlying data, and Adobe provides the calculation formulas in the data feed documentation, in theory you should be able to replicate the numbers exactly:
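As a concrete illustration, here is a minimal sketch of that kind of validation, assuming a hypothetical `datafeed` table whose columns were loaded as text (`date_time`, `post_page_event`, `post_visid_high`, `post_visid_low`, `visit_num`, `exclude_hit`) and comparing against the same metrics pulled through RSiteCatalyst. The table name, column names, date range, and credentials below are placeholders, and the SQL is a simplified version of the documented formulas:

```r
library(DBI)
library(RSiteCatalyst)

# Connect to the PostgreSQL database holding the data feed (placeholder credentials)
con <- dbConnect(RPostgres::Postgres(), dbname = "adobe", host = "localhost",
                 user = "analytics", password = "xxxxx")

# Daily page views and visits from the raw hits, roughly following the data feed
# documentation: page views = non-excluded hits with post_page_event = 0;
# visits = distinct (post_visid_high, post_visid_low, visit_num) combinations
feed_daily <- dbGetQuery(con, "
  SELECT date_time::date AS hit_date,
         sum(CASE WHEN post_page_event = '0' THEN 1 ELSE 0 END) AS pageviews,
         count(DISTINCT (post_visid_high, post_visid_low, visit_num)) AS visits
  FROM   datafeed
  WHERE  exclude_hit = '0'
  GROUP  BY 1
  ORDER  BY 1
")

# The same metrics from the Adobe Analytics reporting API, for the same date range
SCAuth("username:company", "shared_secret")
api_daily <- QueueOvertime("reportsuiteid", "2016-01-01", "2016-01-31",
                           c("pageviews", "visits"), date.granularity = "day")

# If the feed was loaded correctly, every difference should be exactly 0
feed_daily$pageviews - api_daily$pageviews
feed_daily$visits    - api_daily$visits
```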
Sure enough, running that comparison shows that the “two different sources” produce exactly the same values (i.e. the differences are all 0), so everything has been loaded properly into the PostgreSQL database.
Finding Anomalies For Creating Bot Rules
With the data validated, we can now start digging deeper into the data. As an example, although I have bot filtering enabled, it only handles bots on the IAB bot list, not necessarily people trying to scrape my site (or worse).
To create a custom bot rule in Adobe Analytics, you can use IP address(es) and/or a User-Agent string. However, as part of data exploration we are not limited to just those two features (assuming, of course, that you can map your feature set back to an IP/User-Agent combination). To identify outlier behavior, I’m going to use a technique called the local outlier factor (LOF), via the Rlof package in R, with the following data features:
- Distinct Days Visited
- Total Pageviews
- Total Visits
- Distinct Pages Viewed
- Pageviews Per Visit
- Average Views Per Page
These aren’t the only features I could’ve used, but with these metrics it should be pretty easy to spot bot/scraper traffic. Here’s the code:
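Since the original snippet isn’t reproduced here, below is a minimal sketch of the approach: aggregate the six features above by IP address from the hypothetical `datafeed` table (reusing the `con` connection from the validation step), then score each IP with `Rlof::lof()`. Table and column names are assumptions, so adjust them to your schema:

```r
library(Rlof)

# One row per IP address with the six features listed above
ip_summary <- dbGetQuery(con, "
  SELECT ip,
         count(DISTINCT date_time::date)                              AS distinct_days,
         sum(CASE WHEN post_page_event = '0' THEN 1 ELSE 0 END)       AS total_pageviews,
         count(DISTINCT (post_visid_high, post_visid_low, visit_num)) AS total_visits,
         count(DISTINCT post_pagename)                                AS distinct_pages
  FROM   datafeed
  WHERE  exclude_hit = '0'
  GROUP  BY ip
")
ip_summary$pageviews_per_visit <- ip_summary$total_pageviews / ip_summary$total_visits
ip_summary$avg_views_per_page  <- ip_summary$total_pageviews / ip_summary$distinct_pages

# Scale the features so no single metric dominates the distance calculation,
# then compute the local outlier factor using k = 5 nearest neighbors
# (lof() also accepts a vector of k values and a cores argument for parallelism)
features <- scale(ip_summary[, c("distinct_days", "total_pageviews", "total_visits",
                                 "distinct_pages", "pageviews_per_visit",
                                 "avg_views_per_page")])
ip_summary$lof <- lof(features, k = 5)

# Rank IP addresses from most to least anomalous
ip_summary <- ip_summary[order(-ip_summary$lof), ]
head(ip_summary, 10)
```

Scaling first and using k = 5 are judgment calls; passing a vector of k values returns a matrix of scores you can compare across neighborhood sizes.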
A local outlier factor greater than 1 is classified as a potential outlier. Here’s a visual of the LOF scores for the 500 worst-scoring IP addresses (vegalite R graph code):
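The original chart was built with the vegalite package; since that code isn’t reproduced here, a rough ggplot2 equivalent (sorted LOF scores for the 500 worst-scoring IPs, with a reference line at the LOF = 1 threshold) would look something like this:

```r
library(ggplot2)

# Plot the 500 highest LOF scores in descending order, with a dashed
# reference line at the LOF = 1 potential-outlier threshold
top500 <- head(ip_summary, 500)   # ip_summary is already sorted by lof
top500$rank <- seq_len(nrow(top500))

ggplot(top500, aes(x = rank, y = lof)) +
  geom_line() +
  geom_hline(yintercept = 1, linetype = "dashed") +
  labs(x = "IP address rank (worst-scoring first)",
       y = "Local outlier factor",
       title = "Top 500 LOF scores by IP address")
```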
We can see from the graph that there are at least 500 IP addresses that are potential outliers (since the line never drops below an LOF value of 1). These IP addresses are a good starting point for going back to our overall table and inspecting their full data feed records.
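For example, one way to pull the hit-level records for the most suspicious IPs back out of PostgreSQL, reusing the `con` connection and the sorted `ip_summary` data frame from the sketches above (the `paste()`-built SQL is fine for internal exploration):

```r
# Fetch every data feed row for the 25 most anomalous IP addresses
suspect_ips <- head(ip_summary$ip, 25)

suspect_hits <- dbGetQuery(con, paste0(
  "SELECT * FROM datafeed WHERE ip IN ('",
  paste(suspect_ips, collapse = "','"),
  "') ORDER BY ip, date_time"
))
```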
But what about business value?
The example above just scratches the surface of what’s possible when you have access to the raw data from Adobe Analytics. These calculations are feasible on my laptop in R only because I have a few hundred thousand records and IP addresses. But this kind of ops work is pretty low-value: unless you are trying to detect system hacking, finding hidden scrapers/spiders in your data to filter out just modifies the denominator of your KPIs; it doesn’t lead to real money per se.
In the last post of this series, I’ll cover how to work with the data feed using Spark, and provide an example of using Spark MLlib to increase site engagement.