Sessionizing Log Data Using SQL

Over my career as a predictive modeler/data scientist, the most important steps in any data project, without question, have been data cleaning and feature engineering. By taking the data you have, correcting flaws, and reformulating the raw data into additional business-specific concepts, you ensure that you move beyond pure mathematical optimization and actually solve a business problem. While “big data” is often held up as the future of knowing everything, when it comes down to it, a Hadoop cluster is more often a “Ha-dump” cluster: the place data gets dumped without any proper ETL.

For this blog post, I’m going to highlight a common request for time-series data: combining discrete events into sessions. Whether you are dealing with sensor data, television viewing data, digital analytics data or any other stream of events, the problem of interest is usually how a human interacts with a machine over a given period of time, not each individual event.

While I usually use Hive (Hadoop) for daily work, I’m going to use Postgres (via OSX Postgres.app) to make this as widely accessible as possible. In general, this process will work with any infrastructure/SQL-dialect that supports window functions.

Connecting to Database/Load Data

For lightweight tasks, I find using psql (command-line tool) is easy enough. Here are the commands to create a database to hold our data and to load our two .csv files (download here and here):

[Screenshot: psql commands to create the database and load the .csv files]
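In case the screenshot isn’t visible, here’s a minimal sketch of what those commands look like. The table and column names come from the queries below; the database name, the .csv file name, and the column types are assumptions (the second file loads the same way with another \copy):

--Sketch only: create a database/table, then load a .csv with psql's \copy
--(database name, file name, and column types are assumed)
CREATE DATABASE sessionization;
\c sessionization
CREATE TABLE single_col_timestamp (
uid             varchar(255),
event_timestamp timestamp
);
\copy single_col_timestamp from 'single_col_timestamp.csv' with (format csv)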

These files contain timestamps generated for 1000 uid values.

Query 1 (“Inner”): Determining Session Boundary Using A Window Function

In order to determine the boundary of each session, we can use a window function along with lag(), which lets us compare the current row being processed against the prior row. Of course, for all of this to work correctly, the data needs to be sorted in time order within each user:

--Create boundaries at 30 minute timeout
select
uid,
event_timestamp,
(extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp))/60 as minutes_since_last_interval,
case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
from single_col_timestamp;

For this query, we use the lag() function on the event_timestamp column, and OVER (PARTITION BY uid ORDER BY event_timestamp) to define the window over which the calculation runs. To clarify how this syntax works, I’ve also added a column showing how many minutes have passed since the prior event, which validates that the 30-minute boundary is calculated correctly. The result is as follows:

[Screenshot: query results showing minutes_since_last_interval and new_event_boundary]

For each row where the value of minutes_since_last_interval > 30, there is a value of 1 for new_event_boundary.
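As an aside, Postgres also supports naming the window once with a WINDOW clause, so the OVER (…) definition doesn’t have to be repeated for each column. Here is the same boundary logic sketched that way:

--Same boundary logic, with the window defined once
select
uid,
event_timestamp,
(extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER w)/60 as minutes_since_last_interval,
case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER w > 30 * 60 then 1 ELSE 0 END as new_event_boundary
from single_col_timestamp
WINDOW w AS (PARTITION BY uid ORDER BY event_timestamp);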

Query 2 (“Outer”): Creating A Session ID

The query above defines the event boundaries (which is helpful), but if we want to calculate session-level metrics, we need to create a unique id for each set of rows that are part of one session. To do this, we’re again going to use a window function:

select
uid,
sum(new_event_boundary) OVER (PARTITION BY uid ORDER BY event_timestamp) as session_id,
event_timestamp,
minutes_since_last_interval,
new_event_boundary
from
			--Query 1: Define boundary events
			(select
			uid,
			event_timestamp,
			(extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp))/60 as minutes_since_last_interval,
			case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
			from single_col_timestamp
			) a;

This query defines the same OVER (PARTITION BY uid ORDER BY event_timestamp) window, but rather than lag(), the outer query uses sum(). Because the window includes an ORDER BY, sum() becomes a running (cumulative) sum: every time a 1 shows up, the session_id field increments by 1; when the value is 0, the running sum stays the same as the row above, so the row keeps the same session_id. This is easier to understand visually:

[Screenshot: sessionized data with the cumulative-sum session_id column]

At this point, we have a session_id for each group of rows where there are no 30-minute gaps in behavior.
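Since the whole reason for creating session_id was calculating session-level metrics, here is a minimal sketch of the kind of aggregation it enables, reusing Query 2 as a subquery (the metric names are just for illustration):

--Session-level metrics (sketch): events per session and session length in minutes
select
uid,
session_id,
count(*) as events,
(extract(epoch from max(event_timestamp)) - extract(epoch from min(event_timestamp)))/60 as session_length_minutes
from
			--Query 2: assign a session_id via cumulative sum
			(select
			uid,
			sum(new_event_boundary) OVER (PARTITION BY uid ORDER BY event_timestamp) as session_id,
			event_timestamp
			from
						(select
						uid,
						event_timestamp,
						case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
						from single_col_timestamp
						) a
			) b
group by uid, session_id;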

Final Query: Cleaned Up

Although the previous query technically gets the job done, I usually concatenate uid and session_id together, just to highlight that the value is usually a ‘key’, not a metric in itself (though it can be). Concatenating the keys together and removing the teaching columns results in the following query:

--Query 3: Outer query uses window function with sum to do cumulative sum as the id, concatenate to uid
select
uid,
uid || '-' || cast(sum(new_event_boundary) OVER (PARTITION BY uid ORDER BY event_timestamp) as varchar) as session_id,
event_timestamp
from
			--Query 1: Define boundary events
			(select
			uid,
			event_timestamp,
			case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
			from single_col_timestamp
			) a;

[Screenshot: final sessionized data with concatenated uid-session_id key]

Window Functions, Will You Marry Me?

The first time I was asked to solve sessionization of time-series data using Hive, I was sure the answer would be that I’d have to get a peer to write some nasty custom Java code to generate unique ids; in retrospect, the solution is so obvious and simple that I wish I had tried to do this years ago. This is a pretty easy problem to solve using imperative programming, but if you’ve got a gigantic amount of hardware in an RDBMS or Hadoop, SQL takes care of all of the calculation without your needing to think through looping (or more complicated logic/data structures).
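For what it’s worth, since Hive is where this problem first came up for me: the same pattern should carry over with only minor syntax changes. A rough, untested sketch of the boundary query in Hive, assuming unix_timestamp() takes the place of extract(epoch from …):

--Hive sketch of Query 1 (untested): unix_timestamp() returns epoch seconds
select
uid,
event_timestamp,
case when unix_timestamp(event_timestamp) - lag(unix_timestamp(event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
from single_col_timestamp;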

Window functions occupy a weird space in the SQL language: they allow you to do sequential calculations in a language generally thought of in terms of “set-level” operations (no implied order, whole-table calculations rather than row-by-row state). But now that I’ve gotten the hang of them, I can’t imagine my analytical life without them.


RSiteCatalyst Version 1.4.3 Release Notes

It’s a new year, so…new version of RSiteCatalyst on CRAN! For the most part, this release fixes a handful of bugs that weren’t noticed in the prior 1.4.2 release (oops!), but there are also a few pieces of additional functionality.

New functionality: Data Feed monitoring

For those of you who have hourly or daily data feeds delivered via FTP, you can now find out the details of a single data feed using GetFeed(), and list all of a company’s feeds along with the processing status of each using GetFeeds().

For example, calling GetFeed() with a specific feed number will return the following information as a data frame:

[Screenshot: GetFeed() result data frame]

Similarly, if you call GetFeeds("report-suite"), you’ll get the following information as a data frame:

[Screenshot: GetFeeds() result data frame]

I only have one feed set up for testing, but if there were more feeds delivered each day, they would show up as additional rows in the data frame. The interpretation here is that the daily feed for 1/5/15 was delivered (the 05:00:00 is GMT).

Bug Fixes

RSiteCatalyst v1.4.2 attempted to fix an issue where QueueRanked would error if two SAINT classifications were used. Unfortunately, by fixing that issue, QueueRanked ONLY worked with SAINT Classifications. This was only out in the wild for a month, so hopefully it didn’t really affect anyone.

Additionally, segment.id and segment.name weren’t being returned in the data frame by the Queue* functions. This has also been fixed.

Test Suite Using Travis CI

To avoid future errors like the ones mentioned above, a full test suite using testthat has been added to RSiteCatalyst and monitored via Travis CI. While there is coverage for every public function within the package, there are likely additional tests that can be added for functionality I didn’t cover. If anyone out there has particularly weird cases they use and aren’t incorporated in the test suite, please feel free to file an issue or submit a pull request and I’ll figure out how to incorporate it into the test suite.

DataWarehouse API

Finally, the last bit of changes to RSiteCatalyst in v1.4.3 are internal preparations for a new package I plan to release in the coming months: AdobeDW. Several folks have asked for the ability to control Data Warehouse reports via R; for various reasons, I thought it made sense to break this out from RSiteCatalyst into its own package. If there are any R-and-Adobe-Analytics enthusiasts out there that would like to help development, please let me know!

Feature Requests/Bugs

As always, if you come across bugs or have feature requests, please continue to use the RSiteCatalyst GitHub Issues page to submit issues. Don’t worry about cluttering up the page with tickets; please file a new issue for anything you encounter (including the code you’ve tried that is failing), unless you are SURE that it is the same problem someone else is facing.

And finally, like I end every blog post about RSiteCatalyst, please note that I’m not an Adobe employee. This hasn’t been an issue for a few months, so maybe next time I won’t end the post with this boilerplate :)


Review: Data Science at the Command Line

Admission: I didn’t really know how computers worked until around 2012.

For the majority of my career, I’ve worked for large companies with centralized IT functions. Like many statisticians, I fell into a comfortable position of learning SAS in a Windows environment, had Ops people to fix any Unix problems I’d run into and DBAs to load data into a relational database environment.

Then I became a consultant at a boutique digital analytics firm. To say I was punching above my weight would be an understatement. All of a sudden it was time to go into various companies, have a one-hour kickoff meeting, then start untangling the spaghetti mess that represented their various technology systems. I also needed to figure out the boutique firm’s hacked-together AWS and Rackspace infrastructure.

[Image: Data Science at the Command Line]

I’m starting off this review with that admission because my story of learning to work from the command line parallels that of Data Science at the Command Line author Jeroen Janssens:

Around five years ago, during my PhD program, I gradually switched from using Microsoft Windows to GNU/Linux…Out of necessity I quickly became comfortable using the command line. Eventually, as spare time got more precious, I settled down with a GNU/Linux distribution known as Ubuntu…

  • Preface, pg. xi

Because a solid majority of people have never learned anything beyond a point-and-click interface (Windows or Mac), the title of the book Data Science at the Command Line is somewhat unfortunate; this is a book for ANYONE looking to start manipulating files efficiently from the command line.

Getting Started, Safely

One of the best parts of Data Science at the Command Line is that it comes with a pre-built virtual machine with 80-100 or more command line tools installed. This is a very fast and safe way to get started with the command line, as the tools are pre-installed and no matter what command you run while you’re learning, you won’t destroy a computer you actually care about!

Chapters 2 and 3 move through the steps of installing the virtual machine, explaining the essential concepts of the command line, some basic commands showing simple (but powerful!) ways to chain command line tools together and how to obtain data. What I find so refreshing about these two chapters by Janssens is that the author assumes zero knowledge of the command line by the reader; these two chapters are the most accessible summary of how and why to use the command line I’ve ever read (Zed Shaw’s CLI tutorial is a close second, but is quite terse).

The OSEMN model

The middle portion of the book covers the OSEMN model (Obtain-Scrub-Explore-Model-iNterpret) of data science; another way this book is refreshing is that rather than jumping right into machine learning/predictive modeling, the author spends a considerable amount of time covering the gory details of real analysis projects: manipulating data from the format you receive (XML, JSON, sloppy CSV files, etc.) and taking the (numerous) steps required to get to the format you want.

By introducing tools such as csvkit (CSV manipulation), jq (a JSON processor), and classic tools such as sed (stream editor) and (g)awk, the reader gets a full treatment of how to deal with malformed data files (which in my experience are the only type available in the wild!). Chapter 6 (“Managing Your Data Workflow”) is also a great introduction to reproducible research using Drake (Make for data analysis). This is an area I will personally be focusing my time on, as I tend to run a lot of one-off commands in HDFS and, as of now, just copy them into a plain-text file. Reproducing = copy-paste in my case, which defeats the purpose of computers and scripting!

An Idea Can Be Stretched Too Far

Chapters 8 and 9 cover Parallel Processing using GNU Parallel and Modeling Data respectively. While GNU Parallel is a tool I could see using sometime in the future, I do feel like building models and creating visualizations straight from the command line is getting pretty close to just being a parlor trick. Yes, it’s obviously possible to do such things (and the author even wrote his own command line tool Rio for using R from the command line), but with the amount of iteration, feature building and fine-tuning that goes on, I’d rather use IPython Notebook or RStudio to give me the flexibility I need to really iterate effectively.

A Book For Everyone

As I mentioned above, I really feel that Data Science at the Command Line is a book well suited for anyone who does data analysis. Jeroen Janssens has done a fantastic job of taking his original “7 command-line tools for data science” blog post and extending the idea to a full-fledged book. This book has a prominent place in my work library next to Python for Data Analysis and in the past two months I’ve referred to each book at roughly the same rate. For under $30 for paperback at Amazon, there’s more than enough content to make you a better data scientist.

