About a month ago, I switched this blog from WordPress hosted on Bluehost to Jekyll on GitHub Pages. I suspected that moving to a static website would be faster than HTML generated via PHP, and it is certainly cheaper (GitHub Pages is “free”). But it wasn’t until I needed a dataset for some data visualization development that I realized just how much of an improvement it has been!
Packages, Packages, Packages
With the release of v0.5 of Julia, I’ve been working (less) on updating my packages and making new packages (more), because making new stuff is more fun than maintaining old stuff! One of the packages I’ve been building is for the ECharts visualization library (v3) from Baidu. While Julia doesn’t necessarily need another visualization library, visualization is something I’m interested in, and learning is easier when you’re solving problems you like. And since the world doesn’t need another Iris example, I decided to share some real-world website performance data :)
```julia
using ECharts, DataFrames

# Read in data
df = readtable("/assets/data/website_time_data.csv")

# Make data two different series that overlap, so the endpoints touch
df[:pre]  = [(x[1] <= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]
df[:post] = [(x[1] >= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]

# Graph code
l = line(df[:date], hcat(df[:pre], df[:post]))
l.ec_width = 800
seriesnames!(l, ["pre", "post"])
colorscheme!(l, palette = ("acw", "FlatUI"))
yAxis!(l, name = "Load time in ms")
title!(l, text = "randyzwitch.com",
       subtext = "Switching from WordPress on Bluehost to Jekyll on GitHub (2016/09/06)")
toolbox!(l, chartTypes = ["bar", "line"])
slider!(l)
```
Even though I switched from WordPress to Jekyll on 9/6/2016, it appears that the page cache for Google Webmaster Tools didn’t really expire until 9/12/2016 or so. In the average case, the load time went from 1128ms to 38ms! Of course, this isn’t really a fair comparison, as presumably GitHub Pages runs on much better hardware than the cheap Bluehost hosting I have, and I didn’t reimplement most of the garbage I had on the WordPress version of the blog. But from a user-experience standpoint, good lord what an improvement!
Wanting to test out further functionality, here are some box plots of the load-time variation:
```julia
using ECharts, DataFrames

# Read in data
df = readtable("/Users/randyzwitch/Desktop/website_load_time.csv")
df[:pre]  = [(x[1] <= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]
df[:post] = [(x[1] >= "2016-09-12" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]

# Remove nulls
pre  = [x for x in df[:pre] if x != nothing]
post = [x for x in df[:post] if x != nothing]

# Graph code
b = box([pre, post], names = ["WordPress", "Jekyll"])
b.ec_width = 800
colorscheme!(b, palette = ("acw", "VitaminC"))
yAxis!(b, name = "Load time in ms", nameGap = 50, min = 0)
title!(b, text = "randyzwitch.com",
       subtext = "Switching from WordPress on Bluehost to Jekyll on GitHub (2016/09/06)")
toolbox!(b)
```
Usually, a box-plot comparison as smushed as the Jekyll plot vs. the WordPress one would be a poor visualization, but in this case I think it actually works. The load time for the Jekyll version of this blog is so quick and so consistent that it would barely register as an outlier if it were WordPress data! It’s crazy to think that the -1.5 * IQR time for WordPress is the mean/median/min load time of Jekyll.
Where To Go Next?
This blog post is really just an interesting finding from my experience moving to Jekyll on GitHub. As it stands now, ECharts.jl is still in pre-METADATA mode. Right now, I assume that this would be a useful enough package to submit to METADATA some day, but I guess that depends on how much further I get smoothing the rough edges. If there are people who are interested in cleaning up this package further, I’d absolutely love to collaborate.
This blog post also serves as release notes for RSiteCatalyst v1.4.9, as only one feature was added (batch report request and download). But it’s a feature big enough for its own post!
Recently, I was asked how I would approach replicating the market basket analysis blog post I wrote for 33 Sticks, but using a lot more data. Like, months and months of order-level data. While you might be able to submit multiple months worth of data in a single RSiteCatalyst call, it’s a lot more elegant to request data from the Adobe Analytics API in several calls. With the new batch-submit and batch-receive functionality in RSiteCatalyst, this process can be a LOT faster.
Prior to version 1.4.9 of RSiteCatalyst, API calls could only be made in a serial fashion:
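A sketch of the serial pattern, with a hypothetical `fetch_report()` standing in for a real, credentialed `Queue*` call:

```r
# Stand-in for a blocking RSiteCatalyst Queue* call: in real use, each call
# waits while the Adobe Analytics API computes the report
fetch_report <- function(month) {
  data.frame(month = month, pageviews = 0)  # placeholder result
}

months <- c("2016-01", "2016-02", "2016-03")
results <- list()
for (m in months) {
  results[[m]] <- fetch_report(m)  # next request isn't submitted until this returns
}
combined <- do.call(rbind, results)
```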
The underlying assumption from a package development standpoint was that the user would be working in an interactive fashion: submit a report request, wait to get the answer back. There’s nothing inherently wrong with this code from an R standpoint; it was slow only because you had to wait for one report to be calculated by the Adobe Analytics API before the next one could be submitted.
Of course, most APIs can process multiple calls simultaneously, and the Adobe Analytics API is no exception. Thanks to user shashispace, it’s now possible to submit all of your report calls at once, then retrieve the results:
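A sketch of the batch pattern, again with hypothetical stand-ins (`submit_report()` for a `Queue*` call with `enqueueOnly = TRUE`, `get_report()` for retrieving a finished report by its `report.id`):

```r
# Stand-in for Queue*(..., enqueueOnly = TRUE): returns a report.id immediately
submit_report <- function(month) paste0("report-", month)

# Stand-in for retrieving a finished report by report.id
get_report <- function(report_id) data.frame(id = report_id, pageviews = 0)

months <- c("2016-01", "2016-02", "2016-03")

# 1) Submit everything up front; the API starts computing all reports at once
ids <- lapply(months, submit_report)

# 2) Then collect the results; the other reports keep processing in the
#    background while each one is fetched
results <- lapply(ids, get_report)
combined <- do.call(rbind, results)
```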
This code is nearly identical to the serial snippet above, except for 1) the addition of the enqueueOnly = TRUE keyword argument and 2) lowering the interval.seconds keyword argument to 1 second instead of 60. When you use the enqueueOnly keyword, instead of returning the report results back, a Queue* function will return the report.id; by accumulating these report.id values in a list, we can next retrieve the reports and bind them together using dplyr.
Performance gain: 4x speed-up
Although the code snippets are nearly identical, it is way faster to submit all the reports at once and then retrieve the results. When the requests are submitted all at once, the API processes numerous calls simultaneously, and while you are retrieving the results of one call, the others continue to process in the background.
I wouldn’t have thought this would make such a difference, but retrieving one month of daily order-level data went from taking 2420 seconds to 560 seconds! If you were to retrieve the same amount of daily data, but for an entire year, that would mean saving 6 hours in processing time.
Keep The Pull Requests Coming!
The last several RSiteCatalyst releases have been driven by contributions from the community and I couldn’t be happier! Given that I don’t spend much time in my professional life now using Adobe Analytics, having improvements driven by a community of users using the library daily is just so rewarding.
So, if you have a suggestion for improvement (and especially if you find a bug), please submit an issue on GitHub. Submitting questions and issues to GitHub is the easiest way for me to provide support, while also giving other users the chance to answer your question before I might. It also provides a means for others to determine whether they are experiencing a new or previously known problem.
In a previous post, I outlined how to load daily Adobe Analytics Clickstream data feeds into a PostgreSQL database. While this isn’t a long-term scalable solution for large e-commerce companies doing millions of page views per day, for exploratory analysis a relational database structure can work well until a more robust solution is put into place (such as Hadoop/Spark).
Before digging too deeply into the data, we should validate that data from the data feed in our database (custom database view code) matches what we observe from other sources (mainly, the Adobe Analytics interface and/or RSiteCatalyst). Given the Adobe Analytics data feed represents an export of the underlying data, and Adobe provides the formulas in the data feed documentation, in theory you should be able to replicate the numbers exactly:
```r
# "Source 1": Pull data from the API using RSiteCatalyst
library("RSiteCatalyst")

SCAuth(Sys.getenv("USER", ""), Sys.getenv("SECRET", ""))

overtime <- QueueOvertime("zwitchdev",
                          date.from = "2016-04-01",
                          date.to = "2016-05-17",
                          metrics = c("pageviews", "visits", "visitors"),
                          date.granularity = "day")

# "Source 2": Pull data from Postgres database
library(RPostgreSQL)

# Connect to database (connection details assumed; adjust to your environment)
conn <- dbConnect(dbDriver("PostgreSQL"),
                  host = "localhost", dbname = "adobe", user = "postgres")

metrics_db <- dbGetQuery(conn, "
select
date(date_time) as date_localtime,
sum(CASE WHEN post_page_event = '0' THEN 1 END) as pageviews,
count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text, visit_num::text], '')) as visits,
count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text], '')) as visitors
from hit_data  -- table name assumed from the prior data feed load
where date_time between '2016-04-01' and '2016-05-18' and exclude_hit = '0'
group by 1
order by 1;")

# Compare data sources
```
The code snippet above shows the validation and, sure enough, the “two different sources” show the exact same values (i.e., the differences are 0), so everything has been loaded properly into the PostgreSQL database.
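The comparison itself isn’t shown above; a toy sketch of how it might look (column names assumed, fabricated data for illustration):

```r
# Toy data standing in for the API result and the database query result
api <- data.frame(datetime       = c("2016-04-01", "2016-04-02"),
                  pageviews      = c(100, 120))
db  <- data.frame(date_localtime = c("2016-04-01", "2016-04-02"),
                  pageviews      = c(100, 120))

# Join the two sources by day, then difference the metrics
compare <- merge(api, db, by.x = "datetime", by.y = "date_localtime")
compare$pageviews_diff <- compare$pageviews.x - compare$pageviews.y
all(compare$pageviews_diff == 0)  # TRUE when the data feed matches the API
```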
Finding Anomalies For Creating Bot Rules
With the data validated, we can now start digging deeper into the data. As an example, although I have bot filtering enabled, this only handles bots on the IAB bot list but not necessarily people trying to scrape my site (or worse).
To create a custom bot rule in Adobe Analytics, you can use IP address(es) and/or User-Agent string. However, as part of data exploration we are not limited to just these features (assuming, of course, that you can map your feature set back to an IP/User-Agent combo). To identify outlier behavior, I’m going to use a technique called ‘local outlier factors’ using the Rlof package in R with the following data features:
Distinct Days Visited
Distinct Pages Viewed
Pageviews Per Visit
Average Views Per Page
These aren’t the only features I could’ve used, but it should be pretty easy to view bot/scraper traffic using these metrics. Here’s the code:
```r
# Local outlier factor calculation
library(Rlof)

# Outer query derives ratio features; inner query aggregates per IP address
# (table name assumed from the prior data feed load)
metrics_lof <- dbGetQuery(conn, "
select
ip,
visits,
pageviews,
distinct_pages_viewed,
distinct_days_visited,
pageviews/visits::double precision as pv_per_visit,
pageviews/distinct_pages_viewed::double precision as avg_views_per_page
from (
select
ip,
sum(CASE WHEN post_page_event = '0' THEN 1 END) as pageviews,
count(distinct ARRAY_TO_STRING(ARRAY[post_visid_high::text, post_visid_low::text, visit_num::text, visit_start_time_gmt::text], '')) as visits,
count(distinct post_pagename) as distinct_pages_viewed,
count(distinct date(date_time)) as distinct_days_visited
from hit_data
where exclude_hit = '0'
group by 1
) a
where visits > 1 and pageviews > 1;")

# The higher the value of k, the more likely lof will be calculated...
# ...but the more generic the clusters
# NaN/Inf occurs with points on top of one another/div by zero, which is likely...
# ...with web data when most visitors have 1-2 sessions
df_lof <- lof(metrics_lof[, 2:7], k = 20)

# Append results, get top 500 worst-scoring IP addresses
```
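The “append results” step might look like the following sketch (fabricated scores, since the real values depend on the query above):

```r
# Toy data: fake IPs with mostly normal lof scores and a few extreme outliers
set.seed(42)
metrics <- data.frame(ip = sprintf("10.0.0.%d", 1:1000))
metrics$lof <- c(runif(990, 0.8, 1.2), runif(10, 5, 50))

# Sort descending by lof score and keep the 500 worst-scoring IP addresses
worst500 <- head(metrics[order(-metrics$lof), ], 500)
```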
A local outlier factor greater than 1 is classified as a potential outlier. Here’s a visual of the lof scores for the top 500 worst scoring IP addresses (vegalite R graph code):
We can see from the graph that there are at least 500 IP addresses that are potential outliers (since the line doesn’t go below a lof value of 1). These points are now a good starting place to go back to our overall table and inspect the entire datafeed records by IP address.
But what about business value?
The example above just scratches the surface of what’s possible when you have access to the raw data from Adobe Analytics. It’s possible to do these calculations on my laptop using R because I only have a few hundred thousand records and IP addresses. But this kind of ops work is pretty low-value: unless you are trying to detect system hacking, finding hidden scrapers/spiders in your data to filter out just modifies the denominator of your KPIs; it doesn’t lead to real money per se.
In the last post of this series, I’ll cover how to work with the datafeed using Spark, and provide an example of using Spark MLLib to increase site engagement.