Anomaly Detection Using The Adobe Analytics API


As digital marketers & analysts, we’re often asked to quantify when a metric has gone beyond just random variation and become an actual “unexpected” result. For cases such as A/B…n testing, it’s easy to run a t-test to quantify the difference between two test populations, but for time-series metrics a t-test is generally not appropriate.

To determine whether a time-series has become “out-of-control”, we can use Exponential Smoothing to forecast the Expected Value, as well as calculate Upper Control Limits (UCL) and Lower Control Limits (LCL). To the extent a data point exceeds the UCL or falls below the LCL, we can say that statistically a time-series is no longer within the expected range. There are numerous ways to create time-series models using R, but for the purposes of this blog post I’m going to focus on Exponential Smoothing, which is how the anomaly detection feature is implemented within the Adobe Analytics API.

Holt-Winters & Exponential Smoothing

There are three techniques that the Adobe Analytics API uses to build time-series models:

  • Holt-Winters Additive (Triple Exponential Smoothing)
  • Holt-Winters Multiplicative (Triple Exponential Smoothing)
  • Holt Trend Corrected (Double Exponential Smoothing)

The formulas behind each of these techniques are easily found elsewhere, but the main point is that time-series data can have a long-term trend (Double Exponential Smoothing) and/or a seasonal component (Triple Exponential Smoothing). When a seasonal component is present, it can be additive (a fixed amount of change across the series, such as the number of degrees the temperature rises in Summer) or multiplicative (a multiplier relative to the level of the series, such as a 10% increase in sales during holiday periods).
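
To make the distinction concrete, here is a minimal sketch using base R’s HoltWinters() function, which supports all three variants. The page-views series is simulated, and the Adobe Analytics API uses its own implementation rather than this function:

```r
# Simulated daily page views with weekly (frequency = 7) seasonality
set.seed(42)
pageviews <- ts(rpois(70, lambda = 500) + rep(c(0, 0, 0, 0, 0, 150, 200), 10),
                frequency = 7)

# Holt Trend Corrected (Double Exponential Smoothing): level + trend, no seasonality
fit_holt <- HoltWinters(pageviews, gamma = FALSE)

# Holt-Winters Additive (Triple Exponential Smoothing): fixed seasonal offsets
fit_add <- HoltWinters(pageviews, seasonal = "additive")

# Holt-Winters Multiplicative: seasonal effect scales with the level of the series
fit_mult <- HoltWinters(pageviews, seasonal = "multiplicative")
```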

The Adobe Analytics API simplifies the choice of technique by calculating a forecast using all three methods, then choosing the one with the best fit, as measured by the minimum sum of squared errors. It’s important to note that while this is probably an acceptable model-selection method for detecting anomalies, it does not guarantee that the chosen model is the actual “best” forecast model for the data.
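
Continuing with the simulated pageviews series from the sketch above, a rough approximation of that selection rule in R (with base HoltWinters() standing in for whatever the API uses internally) might look like this:

```r
# Fit all three candidate models, then keep the one with the smallest
# sum of squared errors on the historical data
candidates <- list(
  holt_trend        = HoltWinters(pageviews, gamma = FALSE),
  hw_additive       = HoltWinters(pageviews, seasonal = "additive"),
  hw_multiplicative = HoltWinters(pageviews, seasonal = "multiplicative")
)

sse      <- sapply(candidates, function(fit) fit$SSE)
best_fit <- candidates[[which.min(sse)]]

sse                    # compare in-sample fit across the three methods
best_fit$coefficients  # smoothed level/trend/seasonal values of the winning model
```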


RSiteCatalyst API call

Using the RSiteCatalyst R package version 1.1, it’s trivial to access the anomaly detection feature; a minimal sketch of the call appears below. Once the function call is run, you will receive a data frame at ‘Day’ granularity with the actual metric and three additional columns for the forecasted value, UCL, and LCL.
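
The sketch is modeled on the version 1.1 interface, and the credentials, report suite ID, and dates are placeholders; later RSiteCatalyst releases renamed several of these arguments (e.g. reportsuite.id, date.from, date.to), so check the documentation for your installed version:

```r
library("RSiteCatalyst")

# Authenticate with the Adobe Analytics (SiteCatalyst) API; placeholder credentials
SCAuth("user:company", "shared_secret")

# Request daily page views with anomaly detection turned on; the returned
# data frame includes the actual metric plus forecast, UCL and LCL columns
pageviews_w_forecast <- QueueOvertime(reportSuiteID = "report-suite-id",
                                      dateFrom = "2013-06-17",
                                      dateTo = "2013-08-17",
                                      metrics = "pageviews",
                                      dateGranularity = "day",
                                      anomaly.detection = "1")

head(pageviews_w_forecast)
```

Graphing these data using ggplot2 (Graph Code Here – GitHub Gist), we can now see on which days an anomalous result occurred: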

Huge spike in traffic July 23 - 24

The red dots in the graph above indicate days where page views either exceeded the UCL or fell below the LCL. During the July 23 – 24 timeframe, traffic to this blog spiked dramatically due to a blog post about the Julia programming language, and it continued to stay above the norm for about a week afterwards.
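
For anyone who wants to reproduce a similar graph (the GitHub Gist linked above contains the original plotting code), here is a rough ggplot2 sketch. The forecast, upper-bound, and lower-bound column names are assumptions and may differ from what the API actually returns:

```r
library("ggplot2")

df <- pageviews_w_forecast  # result of the QueueOvertime() call above

# Flag days where the actual metric breaks out of the control limits
# (the 'upperBound'/'lowerBound'/'datetime' column names are assumed here)
df$anomalous <- df$pageviews > df$upperBound | df$pageviews < df$lowerBound

ggplot(df, aes(x = datetime)) +
  geom_ribbon(aes(ymin = lowerBound, ymax = upperBound), alpha = 0.2) +
  geom_line(aes(y = pageviews)) +
  geom_point(data = subset(df, anomalous), aes(y = pageviews),
             colour = "red", size = 3) +
  labs(x = "Date", y = "Page Views")
```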

Anomaly Detection Limitations

There are two limitations to keep in mind when using the Anomaly Detection feature of the Adobe Analytics API:

  • Anomaly Detection is currently only available for ‘Day’ granularity
  • Forecasts are built on 35 days of past history

In neither case do I view these limitations as dealbreakers. The first limitation is just an engineering decision, and I’m sure other granularities could be supported if enough people used this functionality.

The 35-day window used to build the forecasts reflects a balance between calculation time and capturing a long-term and/or seasonal trend in the data. With 35 days, you get five weeks of day-of-week seasonality, as well as 35 points from which to calculate a ‘long-term’ trend. If that window is a concern in terms of what constitutes a ‘good’ forecast, there are plenty of other techniques that can be explored using R (or any other statistical software, for that matter); one such sketch follows below.
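
As one example (a minimal sketch using simulated data, not the API’s own method), base R’s HoltWinters() can be fit on however much history you care to pull, with prediction intervals playing the same role as the UCL/LCL:

```r
# Simulated: a year of daily page views with weekly seasonality,
# e.g. what you might pull via QueueOvertime() without anomaly detection
set.seed(7)
daily_pv <- ts(rpois(364, lambda = 500), frequency = 7)

# Fit Holt-Winters on the full year rather than only 35 days
fit <- HoltWinters(daily_pv, seasonal = "additive")

# Forecast the next 14 days with 95% prediction intervals
fcst <- predict(fit, n.ahead = 14, prediction.interval = TRUE, level = 0.95)
head(fcst)  # columns: fit (expected value), upr (UCL-like), lwr (LCL-like)
```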

Elevating the discussion

I have to give a hearty ‘Well Done!’ to the Adobe Analytics folks for elevating the discussion in terms of digital analytics. By using statistical techniques like Exponential Smoothing, analysts can move away from qualitative statements like “Does it look like something is wrong with our data?” to actually quantifying when KPIs are “too far” away from the norm and should be explored further.

Comments

  1. Awesome post, Randy.

  2. Great work Randy! Thank you for your contribution to the Analytics community with this walkthrough post.

    • Randy Zwitch says:

      Thanks Brian. I’m really glad that you guys are starting to build functionality like this into the platform to get the industry moving away from just counting & reporting to actually using legitimate statistical techniques.

  3. Very nice to see statistical models being added into the platforms, especially with auto-fitting. The next generation will most likely have a large pool of predictive models that will fit to your data and generate lift graphs. By next gen I’m talking about the Adobe and Google platforms. IBM/Unica has had this tech in other platforms, and SAS is leaps ahead too. But for pure-play web analytics, it’s getting there.

    • Randy Zwitch says:

      For this use case, I’m glad Adobe has added in the auto-selection of the model, since detecting anomalies in digital analytics data is a low risk scenario (i.e., intended use is to generate ideas for further analysis).

      For actual prediction though, I’m not sure I’d agree with any vendor adding that into their platform. It’s been my experience that an auto-selected model is rarely the “best” model to use when the prediction matters, either due to spurious correlations or stability issues.

  4. Excellent Article, Randy. Anomaly detection has been very popular and widely used in areas like network traffic intrusion and financial fraud. I am beginning to understand how it can be applied in the area of Web Analytics as well. At my organisation, we too have developed a tool for Anomaly Detection for Web Analytics data.

    • Randy Zwitch says:

      Thanks Kushan! You’re absolutely right, network traffic intrusion and financial fraud are great use cases here. In my past in banking, I’ve seen neural networks used for both the cases you describe, though any non-linear/flexible modeling framework can be appropriate.

  5. Thanks for such a nice post!

  6. Thanks for the post, I hadn’t realized you could do that with your excellent RSiteCatalyst package. If at some point you’d be willing to share your ggplot2 code for your graph, I’d be most interested, as I’m trying to improve my ggplot2 skills :-)

  7. Have a look at a paper called “Smart Multi-Modal Marine Monitoring via Visual Analysis and Data Fusion”; it will soon be published at this year’s MAED workshop at the ACMMM conference (22nd Oct 2013). They used a different method to detect anomalous sensor readings :-)

  8. Hi Randy,
    I have tried to replicate your R script as explained in the blog post, but the UCL and LCL values do not seem to return the expected result. It appears the Exponential Smoothing is not being applied.
    I would like to show you my output, how can I send it?

    • Randy Zwitch says:

      Hi Maria –

      You can send me an email using the address listed in the package’s DESCRIPTION file.

  9. Adam Gitzes says:

    Hi Folks,

    I am curious if anybody else is experiencing a similar issue to the one I am encountering:

    Depending on the range of days I choose when running the QueueOvertime call, the plots of the UCL and LCL are wildly different. For page views over shorter time ranges the limits dynamically follow the graph relatively well, tracking changes in traffic during the weekend, but if I extend the time period out, my limits widen and become static.

    Reading the post I believe this must have to do with a different model being selected, but I am confused by the constant 35 days of data feeding the algorithm.

    Has anybody else faced this issue, and what length of date range did your team decide to move forward with? Thank you.

    • Randy Zwitch says:

      Hey Adam –

      How long of a time period are you selecting? As long as the start date of the report stays the same, my understanding is that the exact same model should be used (i.e., the model is built using the 35 days prior to the first date in the report).

      Alternatively, if you are seeing oddities in the model outputs, this might be something to contact ClientCare about. RSiteCatalyst is just a connector, so the results that come through are what was returned by the API; I don’t really have any insight other than wild speculation :)

      • Adam Gitzes says:

        Hey,

        Yes I just read through the documentation and read about the start date being the determining factor. I experimented with a few different dates and I did find this to be the case. It is interesting. Since the code uses the 35 days before the start date for training I suppose I can just run the code for yesterday to yesterday to have the best idea if yesterday was an anomaly. I will discuss with my team, as well as my Adobe representative and follow up on what I do. Thank you.

Trackbacks

  1. […] My favorite folks from Tatvic in India have many practical articles on R for web analytics and prediction (useful, for example, for e-commerce shops). A very current topic is anomaly detection, which Ravi Pathak spoke about at the Superweek 2014 conference. There is also an older video about anomalies in key metrics. Even though Adobe Analytics is adding similar techniques directly into the tool, some people use the API and R for this. […]
