Decreased wait time between API calls (from 5 seconds to 2 seconds) and extended the total time spent retrying an API call before reporting failure (from 100 seconds to 10 minutes)
For those of you Adobe Analytics (Omniture) users who haven't yet tried the Adobe Analytics API, I've created an introductory video to help you get started. Examples of using this package will also continue to appear on this blog under the RSiteCatalyst tag. Enjoy!
In the previous three tutorials (1, 2, 3), we've covered the background of Hadoop, how to build a proof-of-concept Hadoop cluster using Amazon EC2, and how to upload a .zip file to the cluster using Hue. In Part 4, we'll use the data uploaded from the .zip file to create a master table of all of the files, as well as create a view on top of that table.
Creating Tables Using Hive
Like SQL for ‘regular’ relational databases, Hive is the tool we can use within Hadoop to create tables from data loaded into HDFS. Because Hadoop was built with large, messy data in mind, there are some amazingly convenient features for creating and loading data, such as being able to load all files in a directory (assuming they have the same format). Here’s the Hive statement we can use to load the airline dataset:
-- Create table from yearly airline csv files
CREATE EXTERNAL TABLE airline (
  `Year` int, `Month` int, `DayofMonth` int, `DayOfWeek` int,
  `DepTime` int, `CRSDepTime` int, `ArrTime` int, `CRSArrTime` int,
  `UniqueCarrier` string, `FlightNum` int, `TailNum` string,
  `ActualElapsedTime` int, `CRSElapsedTime` int, `AirTime` int,
  `ArrDelay` int, `DepDelay` int, `Origin` string, `Dest` string,
  `Distance` int, `TaxiIn` int, `TaxiOut` int,
  `Cancelled` int, `CancellationCode` string, `Diverted` string,
  `CarrierDelay` int, `WeatherDelay` int, `NASDelay` int,
  `SecurityDelay` int, `LateAircraftDelay` int
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\\'
STORED AS TEXTFILE
LOCATION '/user/hue/airline/airline';
The above statement starts by outlining the structure of the table, which is mostly integers with a few string columns. The next four lines of code specify what type of data we have: delimited files where the fields are terminated (separated) by commas and where the delimiter is escaped using a backslash. Finally, we specify the location of our files, which is the directory where we uploaded the .zip file in Part 3 of this tutorial. Note that we specify an "external table," which means that if we drop the 'airline' table, we will still retain our raw data. Had we not specified the EXTERNAL keyword, Hive would've moved our raw data files into Hive, and had we later dropped the 'airline' table, all of our data would have been deleted. Specifying EXTERNAL also lets us build multiple tables on the same underlying dataset if we so choose.
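To make that distinction concrete, here's a hedged illustration (not a step in this tutorial, so no need to run it): dropping an external table removes only Hive's metadata, so the table can be recreated over the same files at any time.

-- Dropping an EXTERNAL table removes only Hive's metadata;
-- the csv files remain at /user/hue/airline/airline and the
-- CREATE EXTERNAL TABLE statement above can simply be re-run
drop table airline;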
Creating a View Using Hive
One thing that's slightly awkward about Hive is that you can't specify that there is a header row in your files. As such, once the above code loads, we have 22 rows in our 'airline' table where the data is invalid (one header row per file). Another awkward thing about Hive is that there are no row-level operations, so you can't delete data! However, we can very easily fix our problem using a view:
-- Create view to "remove" 22 bad records from our table
create view vw_airline as
select * from airline
where uniquecarrier <> "UniqueCarrier";
Now that we have our view defined, we no longer have to explicitly exclude the rows in every future query we run. Just like in SQL, views are “free” from a performance standpoint, as they don’t require any additional data storage space (they just represent stored code references).
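As a quick illustration (a hypothetical query, not from the original tutorial), any analysis can now reference the view exactly like a table, with the bad header rows already filtered out:

-- Count flights and average arrival delay by year, using the view
select `year`, count(*) as num_flights, avg(arrdelay) as avg_arr_delay
from vw_airline
group by `year`
order by `year`;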
Time for Analysis?!
If you've made it this far, you've waited a long time to do some actual analysis! The next and final part of this tutorial will do some interesting things with Hive and/or Pig to analyze the data. This dataset originated from a data mining contest to predict why a flight would arrive late at its destination, and we'll work through examples toward that end.
As digital marketers & analysts, we’re often asked to quantify when a metric goes beyond just random variation and becomes an actual “unexpected” result. In cases such as A/B..N testing, it’s easy to calculate a t-test to quantify the difference between two testing populations, but for time-series metrics, using a t-test is likely not appropriate.
To determine whether a time-series has become “out-of-control”, we can use Exponential Smoothing to forecast the Expected Value, as well as calculate Upper Control Limits (UCL) and Lower Control Limits (LCL). To the extent a data point exceeds the UCL or falls below the LCL, we can say that statistically a time-series is no longer within the expected range. There are numerous ways to create time-series models using R, but for the purposes of this blog post I’m going to focus on Exponential Smoothing, which is how the anomaly detection feature is implemented within the Adobe Analytics API.
Holt-Winters & Exponential Smoothing
There are three techniques that the Adobe Analytics API uses to build time-series models:

Holt Trend-Corrected (Double Exponential Smoothing)
Holt-Winters Additive (Triple Exponential Smoothing)
Holt-Winters Multiplicative (Triple Exponential Smoothing)
The formulas behind each of the techniques are easily found elsewhere, but the main point behind the three techniques is that time-series data can have a long-term trend (Double Exponential Smoothing) and/or a seasonal trend (Triple Exponential Smoothing). To the extent that a time-series has a seasonal component, the seasonal component can be additive (a fixed amount of increase across the series, such as the number of degrees increase in temperature in Summer) or multiplicative (a multiplier relative to the level of the series, such as a 10% increase in sales during holiday periods).
The Adobe Analytics API simplifies the choice of which technique to use by calculating a forecast using all three methods, then choosing the method that has the best fit as calculated by the model having the minimum (squared) error. It’s important to note that while this is probably an okay model selection method for detecting anomalies, this method does not guarantee that the model chosen is the actual “best” forecast model to fit the data.
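As a rough illustration of the idea (not the Adobe implementation), base R's HoltWinters() function can fit all three variants, and the sum of squared errors is one way to compare them. Here, 'daily_pageviews' is a hypothetical numeric vector of a daily metric:

# Hypothetical daily metric; frequency = 7 captures day-of-week seasonality
pv_ts <- ts(daily_pageviews, frequency = 7)

fits <- list(
  holt_trend        = HoltWinters(pv_ts, gamma = FALSE),                 # level + trend (Double)
  hw_additive       = HoltWinters(pv_ts, seasonal = "additive"),         # + additive seasonality (Triple)
  hw_multiplicative = HoltWinters(pv_ts, seasonal = "multiplicative")    # + multiplicative seasonality (Triple)
)

# Mimic the API's selection rule: keep the model with the minimum squared error
best_fit <- fits[[which.min(sapply(fits, function(f) f$SSE))]]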
RSiteCatalyst API call
Using the RSiteCatalyst R package version 1.1, it’s trivial to access the anomaly detection feature:
#Run until version > 1.0 on CRAN
library(devtools)
install_github("RSiteCatalyst", "randyzwitch", ref = "master")

#Run if version >= 1.1 on CRAN
library("RSiteCatalyst")

#API Authentication
SCAuth(<username:company>, <shared_secret>)

#API function call
pageviews_w_forecast <- QueueOvertime(reportSuiteID = <reportsuite>,
                                      dateFrom = "2013-06-01",
                                      dateTo = "2013-08-13",
                                      metrics = "pageviews",
                                      dateGranularity = "day",
                                      anomalyDetection = "1")
Once the function call is run, you will receive a DataFrame of ‘Day’ granularity with the actual metric and three additional columns for the forecasted value, UCL and LCL. Graphing these data using ggplot2 (Graph Code Here - GitHub Gist), we can now see on which days an anomalous result occurred:
The red dots in the graph above indicate days where page views either exceeded the UCL or fell below the LCL. In the July 23-24 timeframe, traffic to this blog spiked dramatically due to a blog post about the Julia programming language, and it continued to stay above the norm for about a week afterwards.
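For reference, here is a hedged sketch of how such a graph might be built; the forecast and control-limit column names below are assumptions, so check names(pageviews_w_forecast) against what your version of RSiteCatalyst actually returns:

library(ggplot2)

df <- pageviews_w_forecast
# Flag days outside the control limits (column names are assumed)
df$anomaly <- df$pageviews > df$upperBound.pageviews |
              df$pageviews < df$lowerBound.pageviews

ggplot(df, aes(x = datetime, y = pageviews)) +
  geom_line() +
  geom_line(aes(y = forecast.pageviews), colour = "blue") +
  geom_line(aes(y = upperBound.pageviews), linetype = "dashed") +
  geom_line(aes(y = lowerBound.pageviews), linetype = "dashed") +
  geom_point(data = subset(df, anomaly), colour = "red", size = 3)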
Anomaly Detection Limitations
There are two limitations to keep in mind when using the Anomaly Detection feature of the Adobe Analytics API:
Anomaly Detection is currently only available for ‘Day’ granularity
Forecasts are built on 35 days of past history
In neither case do I view these limitations as dealbreakers. The first limitation is just an engineering decision, which I’m sure could be expanded if enough people used this functionality.
As for the 35-day window used to build the forecasts, this is a balance between calculation time and capturing a long-term and/or seasonal trend in the data. With 35 days, you get five weeks of day-of-week seasonality, as well as 35 points from which to estimate a 'long-term' trend. If this window is a concern in terms of what constitutes a 'good' forecast, there are plenty of other techniques that can be explored using R (or any other statistical software, for that matter), as sketched below.
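For example, here is a hedged sketch of rolling your own control limits over a longer history: fit HoltWinters() on as many days as you like (pulled via QueueOvertime() without anomalyDetection) and use predict() with a prediction interval as the UCL/LCL. The 95% level is an arbitrary choice for illustration:

# Fit on a longer history (e.g., 90+ days of daily page views)
fit <- HoltWinters(ts(pageviews_w_forecast$pageviews, frequency = 7))
fcast <- predict(fit, n.ahead = 7, prediction.interval = TRUE, level = 0.95)

# 'fcast' is a matrix with columns fit, upr, lwr -- analogous to the
# forecast, UCL and LCL returned by the Adobe Analytics API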
Elevating the discussion
I have to give a hearty ‘Well Done!’ to the Adobe Analytics folks for elevating the discussion in terms of digital analytics. By using statistical techniques like Exponential Smoothing, analysts can move away from qualitative statements like “Does it look like something is wrong with our data?” to actually quantifying when KPIs are “too far” away from the norm and should be explored further.