Bulk Downloading Adobe Analytics Data

This blog post also serves as release notes for RSiteCatalyst v1.4.9, as only one feature was added (batch report request and download). But it’s a feature big enough for its own post!

Recently, I was asked how I would approach replicating the market basket analysis blog post I wrote for 33 Sticks, but using a lot more data. Like, months and months of order-level data. While you might be able to submit multiple months’ worth of data in a single RSiteCatalyst call, it’s a lot more elegant to request data from the Adobe Analytics API in several smaller calls. With the new batch-submit and batch-receive functionality in RSiteCatalyst, this process can be a LOT faster.

Non-Batched Method

Prior to version 1.4.9 of RSiteCatalyst, API calls could only be made in a serial fashion:

library(RSiteCatalyst)
library(dplyr)

SCAuth(Sys.getenv("USER", ""), Sys.getenv("SECRET", ""))

combined_orders <- data.frame()
for(d in seq(as.Date("2016-06-01"), as.Date("2016-06-30"), by = "day")){

  d_ <- as.character(as.Date(d, origin = "1970-01-01"))
  print(d_)

  order_details <- QueueRanked(
    reportsuite.id = "reportsuite",
    date.from = d_,
    date.to = d_,
    elements = c('evar13', 'product'),
    metrics = c('revenue', 'units', 'orders'),
    top = c(50000, 10000),
    interval.seconds = 60
  )

  order_details$order_date <- d_
  combined_orders <- bind_rows(combined_orders, order_details)  # bind_rows() comes from dplyr
  rm(order_details)

}

The underlying assumption from a package development standpoint was that the user would be working interactively: submit a report request, then wait for the answer to come back. There’s nothing inherently wrong with this code from an R standpoint; it was slow simply because each report had to finish calculating in the Adobe Analytics API before the next one could be submitted.

Batch Method

Of course, most APIs can process multiple calls simultaneously, and the Adobe Analytics API is no exception. Thanks to user shashispace, it’s now possible to submit all of your report calls at once, then retrieve the results:

queued <- list()
for(d in seq(as.Date("2016-06-01"), as.Date("2016-06-30"), by = "day")){
  d_ <- as.character(as.Date(d, origin = "1970-01-01"))

  print(d_)

  reportid <- QueueRanked(
    reportsuite.id = "reportsuite",
    date.from = d_,
    date.to = d_,
    elements = c('evar13', 'product'),
    metrics = c('revenue', 'units', 'orders'),
    top = c(50000, 10000),
    interval.seconds = 1,
    enqueueOnly = TRUE
  )

  queued <- c(queued, reportid)  # accumulate the report ids for later retrieval

}

queued_df <- data.frame()
for (i in queued){
  queued_df <- bind_rows(queued_df, GetReport(i))
}

This code is nearly identical to the serial snippet above, except for 1) the addition of the enqueueOnly = TRUE keyword argument and 2) lowering the interval.seconds keyword argument to 1 second instead of 60. With enqueueOnly set, instead of returning the report results, a Queue* function returns the report.id; by accumulating these report.id values in a list, we can retrieve the reports later with GetReport() and bind them together using dplyr.
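If you do this often, the submit-then-collect pattern can be wrapped into a small helper. This is just a sketch under the same assumptions as the snippets above; get_daily_ranked() is a hypothetical convenience function, not part of RSiteCatalyst:

library(RSiteCatalyst)
library(dplyr)

# Hypothetical helper: submit one enqueue-only request per day, then collect
# and stack the finished reports. Not part of RSiteCatalyst itself.
get_daily_ranked <- function(dates, reportsuite, elements, metrics, top) {

  # Phase 1: submit all requests, keeping only the report ids
  report_ids <- lapply(as.character(dates), function(d_) {
    QueueRanked(
      reportsuite.id = reportsuite,
      date.from = d_,
      date.to = d_,
      elements = elements,
      metrics = metrics,
      top = top,
      interval.seconds = 1,
      enqueueOnly = TRUE
    )
  })

  # Phase 2: retrieve each finished report and bind the results into one data frame
  bind_rows(lapply(report_ids, GetReport))
}

# Same parameters as the loop above
# june_orders <- get_daily_ranked(
#   seq(as.Date("2016-06-01"), as.Date("2016-06-30"), by = "day"),
#   "reportsuite", c("evar13", "product"),
#   c("revenue", "units", "orders"), c(50000, 10000))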

Performance gain: 4x speed-up

Although the code snippets are nearly identical, it is far faster to submit all of the reports at once and then retrieve the results. When the requests are submitted up front, the API processes numerous reports simultaneously, so while you are retrieving the results of one call the others continue to calculate in the background.

I wouldn’t have thought this would make such a difference, but retrieving one month of daily order-level data went from taking 2420 seconds to 560 seconds! If you were to retrieve the same amount of daily data, but for an entire year, that would mean saving 6 hours in processing time.
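For reference, here is the back-of-the-envelope arithmetic behind that yearly estimate, using the timings quoted above:

# Seconds to pull one month of daily order-level data, from the run above
serial_month  <- 2420
batched_month <- 560

# Monthly savings scaled to a full year, converted to hours
(serial_month - batched_month) * 12 / 3600  # roughly 6.2 hours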

Keep The Pull Requests Coming!

The last several RSiteCatalyst releases have been driven by contributions from the community, and I couldn’t be happier! Given that I no longer spend much of my professional life using Adobe Analytics, having improvements driven by a community of people who use the library daily is incredibly rewarding.

So if you have a suggestion for improvement (and especially if you find a bug), please submit an issue on GitHub. Submitting questions and issues to GitHub is the easiest way for me to provide support, while also giving other users a chance to answer your question before I can. It also gives others a way to see whether they are hitting a new or previously known problem.


Adobe Analytics Clickstream Data Feed: Calculations and Outlier Analysis

In a previous post, I outlined how to load daily Adobe Analytics Clickstream data feeds into a PostgreSQL database. While this isn’t a long-term scalable solution for large e-commerce companies doing millions of page views per day, for exploratory analysis a relational database structure can work well until a more robust solution is put into place (such as Hadoop/Spark).

Data Validation

Before digging too deeply into the data, we should validate that the data feed data in our database (custom database view code) matches what we observe from other sources (mainly, the Adobe Analytics interface and/or RSiteCatalyst). Given that the Adobe Analytics data feed is an export of the underlying data, and that Adobe provides the calculation formulas in the data feed documentation, in theory you should be able to replicate the numbers exactly:

# "Source 1": Pull data from the API using RSiteCatalyst
library("RSiteCatalyst")
SCAuth(Sys.getenv("USER", ""), Sys.getenv("SECRET", ""))
overtime <- QueueOvertime("zwitchdev",
                           date.from = "2016-04-01",
                           date.to = "2016-05-17",
                           metrics = c("pageviews", "visits", "visitors"),
                           date.granularity = "day")

# "Source 2": Pull data from Postgres database
library(RPostgreSQL)

# Connect to database
conn <- dbConnect(dbDriver("PostgreSQL"),
                 user="postgres",
                 password="",
                 host="localhost",
                 port=5432,
                 dbname="adobe")

dbdata <- dbGetQuery(conn,
                     "select
                     date(date_time) as date_localtime,
                     sum(CASE WHEN post_page_event = '0' THEN 1 END) as pageviews,
                     count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text, visit_num::text], '')) as visits,
                     count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text], '')) as visitors
                     from usefuldata
                     where date_time between '2016-04-01' and '2016-05-18' and exclude_hit = '0'
                     group by 1
                     order by 1;")

# Compare data sources: every daily difference should be zero
diff_pv <- table(overtime$pageviews - dbdata$pageviews)
diff_pv
#  0
# 47

diff_visits <- table(overtime$visits - dbdata$visits)
diff_visits
#  0
# 47

diff_visitors <- table(overtime$visitors - dbdata$visitors)
diff_visitors
#  0
# 47

The code snippet above shows the validation, and sure enough, the “two different sources” return exactly the same values for all 47 days (i.e. every difference is 0), so everything has been loaded properly into the PostgreSQL database.

Finding Anomalies For Creating Bot Rules

With the data validated, we can now start digging deeper into the data. As an example, although I have bot filtering enabled, that only handles bots on the IAB bot list, not necessarily people trying to scrape my site (or worse).

To create a custom bot rule in Adobe Analytics, you can use IP address(es) and/or the User-Agent string. However, as part of data exploration we are not limited to just those features (assuming, of course, that you can map your feature set back to an IP/User-Agent combination). To identify outlier behavior, I’m going to use a technique called local outlier factor (LOF), via the Rlof package in R, with the following data features:

  • Distinct Days Visited
  • Total Pageviews
  • Total Visits
  • Distinct Pages Viewed
  • Pageviews Per Visit
  • Average Views Per Page

These aren’t the only features I could have used, but bot/scraper traffic should stand out pretty clearly on these metrics. Here’s the code:

# Local outlier factor calculation
library(RPostgreSQL)
library(Rlof)

conn <- dbConnect(dbDriver("PostgreSQL"),
                  user="postgres",
                  password="",
                  host="localhost",
                  port=5432,
                  dbname="adobe")

metrics_lof <- dbGetQuery(conn,
                          "select
                          ip,
                          distinct_days_visited,
                          pageviews,
                          visits,
                          distinct_pages_viewed,
                          pageviews/visits::double precision as pv_per_visit,
                          pageviews/distinct_pages_viewed::double precision as avg_views_per_page
                          from
                          (
                          select
                          ip,
                          sum(CASE WHEN post_page_event = '0' THEN 1 END) as pageviews,
                          count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text, visit_num::text, visit_start_time_gmt::text], '')) as visits,
                          count(distinct post_pagename) as distinct_pages_viewed,
                          count(distinct date(date_time)) as distinct_days_visited
                          from usefuldata
                          where exclude_hit = '0'
                          group by 1
                          ) a
                          where visits > 1 and pageviews > 1;")


# The higher the value of k, the more likely a lof score can be calculated,
# but the more generic the neighborhoods become.
# NaN/Inf occurs when points sit exactly on top of one another (division by zero),
# which is likely with web data where most visitors have only 1-2 sessions.
df_lof <- lof(metrics_lof[, 2:7], k = 20)

# Append results, get top 500 worst scoring IP addresses
results <- cbind(metrics_lof, df_lof)[order(-df_lof),]
worst500 <- head(subset(results, !is.infinite(df_lof)), 500)

A local outlier factor greater than 1 is classified as a potential outlier. Here’s a visual of the lof scores for the top 500 worst scoring IP addresses (vegalite R graph code):
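The original chart uses the vegalite package; as a rough substitute, here is a minimal sketch of a comparable chart in ggplot2, assuming the worst500 data frame from the previous block:

library(ggplot2)

# Rank the IP addresses from worst to best lof score for plotting
worst500$rank <- seq_len(nrow(worst500))

ggplot(worst500, aes(x = rank, y = df_lof)) +
  geom_line() +
  geom_hline(yintercept = 1, linetype = "dashed") +  # scores above 1: potential outliers
  labs(x = "IP address rank (worst scoring first)",
       y = "Local outlier factor")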

We can see from the graph that there are at least 500 IP addresses that are potential outliers, since the line never drops below a lof value of 1. These IP addresses are a good starting point for going back to the overall table and inspecting the complete data feed records by IP address, as sketched below.
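As a sketch of that follow-up step (assuming the conn connection and worst500 data frame from the blocks above, and that the user_agent column was kept when the feed was loaded):

# Pull the complete data feed rows for the flagged IP addresses so their
# behavior can be inspected and, if warranted, turned into custom bot rules.
flagged_ips <- paste0("'", unique(worst500$ip), "'", collapse = ", ")

raw_hits <- dbGetQuery(conn, sprintf(
  "select ip, user_agent, date_time, post_pagename
   from usefuldata
   where exclude_hit = '0' and ip in (%s)
   order by ip, date_time;", flagged_ips))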

But what about business value?

The example above just scratches the surface of what’s possible when you have access to the raw data from Adobe Analytics. These calculations are feasible on my laptop using R only because I have a few hundred thousand records and IP addresses. And this kind of ops work is pretty low-value: unless you are trying to detect system hacking, finding hidden scrapers/spiders to filter out of your data just modifies the denominator of your KPIs; it doesn’t lead to real money per se.

In the last post of this series, I’ll cover how to work with the datafeed using Spark, and provide an example of using Spark MLLib to increase site engagement.


Adobe: Give Credit. You DID NOT Write RSiteCatalyst.

EDIT 5/10/2016 1:30pm: Several folks from Adobe Analytics/Adobe Marketing Cloud have contacted me, and everything is resolved. I can’t untweet other people’s retweets/shares or delete comments on LinkedIn, but if everyone could stop sharing this any further, that would be great. 🙂

* * *

As an author of several open-source software projects, I’ve taken for granted that people using the software share the same community values as I do. Open-source authors provide their code “free” to the community so that others may benefit without having to re-invent the wheel. The only expectation (though not an actual requirement, per se) is attribution to the package author(s), as a thank-you for the time and effort they put into writing and maintaining a quality piece of software.

However, when others take direct credit for writing a package they did not, it crosses into a different realm. Adobe, you DID NOT write RSiteCatalyst, nor have you made any meaningful contributions. To take credit for RSiteCatalyst, either implicitly or explicitly, is a slight to the work of those who have contributed.

Adobe Summit 2014: Attribution!

In the beginning, there seemed to be no problem providing proper attribution. I count Ben Gaines as one of my stronger professional acquaintances (dare I say, even a friend), so I was honored that he not only mentioned me on stage during his Adobe Summit 2014 presentation, but also followed up with an official Adobe blog post recapping his main points:

[Screenshot: rsitecatalyst-attribution]

Perfect. My package got wide exposure to the intended audience, which in turn made it easier to devote time to development and maintenance. The recognition also helped me professionally during that period, so if I never thanked you publicly, Ben: thank you!

Adobe Summit 2015: An Inconspicuous Absence

In 2015, RSiteCatalyst moved from a “Tip” to a full-fledged presentation. I was honored when I first heard that an entire hour would be dedicated to reviewing the package, but no attribution was given:

[Screenshot: rsitecatalyst-resources]

I mean, it was obviously okay to link to non-Adobe websites like statmethods.net (a great reference btw) and to Shiny…but okay, attribution is not a requirement.

Adobe Summit 2016: ‘We at Adobe…’

The non-mention at Adobe Summit 2015 could be attributed to an oversight; the following during the 2016 RSiteCatalyst Adobe Summit presentation cannot:

[Screenshot: rsitecatalyst-randy-zwitch]

Just so we’re clear, this isn’t me noticing the slide notes in a PDF or PPT I shouldn’t have access to. The screenshot above is directly from the Adobe Summit video and the statement was said nearly verbatim during the presentation. And it’s not like this was a one-off comment…it’s the same damn presentation as 2015, and I KNOW this script went through several rounds of review and practice by the presenters.

It Costs $0 To Do What Is Right

It may be hard for RSiteCatalyst users to believe, but this was the first open-source project I ever wrote AND the means by which I learned to write R code AND the first time I ever accessed an API. Since then, Willem Paling did an amazing job refactoring/re-writing the majority of the package when the Adobe Analytics API was updated from version 1.3 to 1.4, and there have been numerous other contributions from the user community. Maybe one day the repo will even reach 100 stars on GitHub…

But save for a single commit to a README file from an employee, Adobe, you have contributed _zero_ to the development and maintenance of this package. To claim otherwise is beyond distasteful to the ethos of open-source software. I’ve never asked for compensation of any kind, and again, I recognize that you don’t even need to attribute the work at all.

Just don’t take credit yourselves for providing this functionality to your customers. You did not write RSiteCatalyst, Adobe; a community of (unpaid) volunteers did.

