Getting Started: Adobe Analytics Clickstream Data Feed


“Well, first you need a TMS and a three-tiered data layer, then some jQuery with a node backend to inject customer data into the page asynchronously if you want to avoid cookie-based limitations with cross-domain tracking and be Internet Explorer 4 compatible…”

Blah Blah Blah. There’s a whole cottage industry around jargon-ing each other to death about digital data collection. But why? Why do we focus on tools instead of the data? Because the tools are necessarily inflexible, we work backwards from the pre-defined reports we have to the data needed to populate them correctly. Let’s go the other way for once: clickstream data to analysis & reporting.

In this blog post, I will show the structure of the Adobe Analytics Clickstream Data Feed and how to work with a day’s worth of data within R. Clickstream data isn’t as raw as pure server logs, but the only limit to what we can calculate from clickstream data is what we can accomplish with a bit of programming and imagination. In later posts, I’ll show how to store a year’s worth of data in a relational database, store the same data in Hadoop, and do analysis using modern tools such as Apache Spark.

This blog post will not cover the mechanics of getting the feed delivered via FTP. The Adobe Clickstream Feed documentation is sufficiently clear on how to get started.

FTP/File Structure

Once your Adobe Clickstream Feed starts being delivered via FTP, you’ll have a file listing that looks similar to the following:

[Screenshot: FTP directory listing of the daily Adobe Clickstream Data Feed files]

What you’ll notice is that with daily delivery, three files are provided, each having a consistent file naming format:

  1. \d+-\S+_\d+-\d+-\d+.tsv.gz

    This is the main file containing the server call level data

  2. \S+_\d+-\d+-\d+-lookup_data.tar.gz

    These are the lookup tables, header files, etc.

  3. \S+_\d+-\d+-\d+.txt

    Manifest file, delivered last so that any automated processes know that Adobe is finished transferring

The regular expressions will be unnecessary for working with our single day of data, but it’s good to realize that there is a consistent naming structure.
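
Just to make the naming concrete, here is a quick sketch that checks the example file names (from the Files section at the end of this post) against those patterns, written as escaped R regular expressions:

```r
# Example file names from a single day of delivery
files <- c("01-zwitchdev_2015-07-13.tsv.gz",
           "zwitchdev_2015-07-13-lookup_data.tar.gz",
           "zwitchdev_2015-07-13.txt")

grepl("^\\d+-\\S+_\\d+-\\d+-\\d+\\.tsv\\.gz$", files)        # main hit-level file
grepl("^\\S+_\\d+-\\d+-\\d+-lookup_data\\.tar\\.gz$", files) # lookup tables
grepl("^\\S+_\\d+-\\d+-\\d+\\.txt$", files)                  # manifest file
```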

Checking md5 hashes

As part of the manifest file, Adobe provides md5 hashes of the delivered files. This serves at least two purposes: 1) confirming that the files truly were delivered in full and 2) verifying that the files haven’t been manipulated or tampered with. To check that the calculated md5 hashes match the values provided by Adobe, we can do something like the sketch below. As long as both calculated hashes are contained within the manifest, we can be confident that the files we downloaded haven’t been modified.
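
Here is a minimal sketch of that check using base R’s tools::md5sum(); the file names come from the Files section at the end of this post, and I’m assuming the hash values appear verbatim somewhere in the manifest text, so adjust the matching to your manifest format if needed:

```r
library(tools)

# File names from the daily delivery (see the Files section at the end of this post)
manifest_file <- "zwitchdev_2015-07-13.txt"
data_files    <- c("01-zwitchdev_2015-07-13.tsv.gz",
                   "zwitchdev_2015-07-13-lookup_data.tar.gz")

# Calculate md5 hashes for the two delivered data files
calculated_hashes <- md5sum(data_files)

# Read the manifest and confirm each calculated hash appears somewhere in it
manifest <- readLines(manifest_file)
sapply(calculated_hashes, function(h) any(grepl(h, manifest)))
```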

Unzipping and Loading Raw Files to Data Frames

Now that our file hashes are validated, it’s time to load the files into R. For the example files, I would be able to fit the entire day into RAM because my blog does very little traffic. However, I’m still going to limit the rows brought in, as if we were working with a large e-commerce website with millions of visits per day:
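
Here is a sketch of the loading step, again using the example file names from the Files section below. The lookup file names inside the tar.gz follow the standard feed layout (column_headers.tsv plus 13 lookup .tsv files), and the two-column naming convention for the lookups (id, *_name) is my own choice to make the joins in the next section easier to read:

```r
# File names follow the daily delivery shown above; adjust for your own feed
lookup_archive <- "zwitchdev_2015-07-13-lookup_data.tar.gz"
hit_file       <- "01-zwitchdev_2015-07-13.tsv.gz"

# Unpack the lookup tables and column headers into a working directory
untar(lookup_archive, exdir = "lookup_data")

# Column names for the hit-level file ship as a single tab-delimited line
column_headers <- strsplit(readLines("lookup_data/column_headers.tsv"), "\t")[[1]]

# Read a limited number of rows from the main server-call-level file;
# read.delim() handles the .tsv.gz directly, and quote = "" avoids problems
# with stray quote characters in the data
hit_data <- read.delim(hit_file, header = FALSE, nrows = 500, quote = "",
                       stringsAsFactors = FALSE, col.names = column_headers)

# Load each lookup table into its own data frame
# (13 lookups + hit_data = the 14 data frames referenced below)
lookup_files <- c("browser", "browser_type", "color_depth", "connection_type",
                  "country", "event", "javascript_version", "languages",
                  "operating_systems", "plugins", "referrer_type",
                  "resolution", "search_engines")

for (lk in lookup_files) {
  # assign() creates a data frame named after each lookup file, which is
  # what the sqldf query in the next section expects to find
  assign(lk, read.delim(file.path("lookup_data", paste0(lk, ".tsv")),
                        header = FALSE, quote = "", stringsAsFactors = FALSE,
                        col.names = c("id", paste0(lk, "_name"))))
}
```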

If we were loading this data into a database, we’d be done with our processing; we have all of our data read into R, and it would be a trivial exercise to load it into a database (we’ll do this in a separate blog post). But since we’re going to analyze this single day of clickstream data, we need to join these 14 data frames together.

SQL: The Most Important Language for Analytics

As a slight tangent, if you don’t know SQL, then you’re going to have a really hard time doing any sort of advanced analytics. There are literally millions of tutorials on the Internet (including this one), and understanding how to join and retrieve data from databases is the key to being more than just a report monkey.

The reason the prior code creates 14 data frames is that the data is delivered in a normalized structure by Adobe. Now we are going to de-normalize the data, which is just a fancy way of saying “join the files together in order to make a gigantic table.”

There are probably a dozen different ways to join data frames using just R code, but I’m going to do it using the sqldf package so that I can use SQL. This allows for a single, declarative statement that shows the relationship between the lookup and fact tables, as sketched below. Three lookup tables won’t be used: color_depth, plugins and event. The first two don’t have a lookup column in my data feed (click the link for a full listing of the Adobe Clickstream data feed columns available). These columns aren’t really useful for my purposes anyway, so it’s not a huge loss. The third table, the event list, requires a separate processing step.
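
Here’s a sketch of that join with sqldf, building on the data frames from the loading sketch above. The hit-level column names used in the JOIN conditions (browser, os, country, language, ref_type, resolution, connection_type, search_engine, javascript) are taken from the standard feed column list, so verify them against your own column_headers.tsv:

```r
library(sqldf)

denormalized <- sqldf("
  SELECT
    h.*,
    b.browser_name,
    os.operating_systems_name,
    c.country_name,
    l.languages_name,
    rt.referrer_type_name,
    r.resolution_name,
    ct.connection_type_name,
    se.search_engines_name,
    js.javascript_version_name
  FROM hit_data h
    LEFT JOIN browser            b  ON h.browser         = b.id
    LEFT JOIN operating_systems  os ON h.os              = os.id
    LEFT JOIN country            c  ON h.country         = c.id
    LEFT JOIN languages          l  ON h.language        = l.id
    LEFT JOIN referrer_type      rt ON h.ref_type        = rt.id
    LEFT JOIN resolution         r  ON h.resolution      = r.id
    LEFT JOIN connection_type    ct ON h.connection_type = ct.id
    LEFT JOIN search_engines     se ON h.search_engine   = se.id
    LEFT JOIN javascript_version js ON h.javascript      = js.id
")
```

LEFT JOINs are used deliberately: every server call stays in the result even when a lookup id has no match, which is exactly the behavior you want when decorating a fact table.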

Processing Event Data

As normalized as the Adobe Clickstream Data Feed is, there is one oddity: the events per server call come in a comma-delimited string in a single column with a lookup table. This implies that a separate level of processing is necessary, outside of SQL, since the column “key” is actually multiple keys and the lookup table specifies one event type per row. So if you were to try and join the data together, you wouldn’t get any matches.

To deal with this in R, we are going to do an EXTREMELY wasteful operation: we are going to create a data frame with a column for each possible event, then evaluate each row to see if that event occurred. This will use a massive amount of RAM, but of course, this is a feature/limitation of R which wouldn’t be an issue if the data were stored in a database.
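
Here’s a sketch of that step, assuming the event lookup table (columns id and event_name, per the loading sketch) and the denormalized data frame from the previous step, plus the standard event_list hit column. Wrapping each id in commas keeps, say, event 2 from matching event 20 or 102:

```r
# Pad the event ids and the per-hit event lists with commas so that each
# id can be matched as a whole token with a fixed-string search
padded_events <- paste0(",", event$id, ",")
padded_list   <- paste0(",", denormalized$event_list, ",")

# One logical column per possible event, one row per hit
event_flags <- as.data.frame(
  sapply(padded_events, function(ev) grepl(ev, padded_list, fixed = TRUE))
)
names(event_flags) <- make.names(event$event_name, unique = TRUE)

# Attach the event indicator columns to the denormalized hit data
final_df <- cbind(denormalized, event_flags)
dim(final_df)
```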

With the final cbind command, we’ve created a 500 row x 1562 column dataset representing a sample of rows from one day of the Adobe Clickstream Data Feed. Having the data denormalized in this fashion takes 6.13 MB of RAM; extrapolating to 1 million rows, you would need 12.26 GB of RAM per day of data you want to analyze, if stored solely in memory.

Next Step: Analytics?!

A thousand words in and 91 lines of R code and we still haven’t done any actual analytics. But we’ve completed the first step in any analytics project: data prep!

In my next blog post in this series, I’ll demonstrate how to actually use this data in analytics, from re-creating reports available in the Adobe Analytics UI (to prove the data is the same) to more advanced analysis such as using association rules, which can be one method for creating a “You may also like…” functionality such as the one at the bottom of this blog.

Files:
http://randyzwitch.com/wp-content/uploads/2015/08/zwitchdev_2015-07-13.txt
http://randyzwitch.com/wp-content/uploads/2015/08/zwitchdev_2015-07-13-lookup_data.tar.gz
http://randyzwitch.com/wp-content/uploads/2015/08/01-zwitchdev_2015-07-13.tsv.gz
