Vega.jl, Rebooted

[Figure: pie and donut charts. Caption: "Mmmmm, baked goods!"]

Rebooting Vega.jl

Recently, I’ve found myself without a project to hack on, and I’ve always been interested in learning more about browser-based visualization. So I decided to revive the work that John Myles White had done nearly two years ago in building Vega.jl. And since I’ll be giving an analytics & visualization workshop at JuliaCon 2015, I figured I’d better study the topic in a bit more depth.

Back In Working Order!

The first thing I tackled was upgrading the syntax to target v0.4 of Julia. This is just my developer preference, to avoid leaning on Compat.jl when there are so many more visualizations I’d like to support. So if you’re using v0.4, you shouldn’t see any deprecation warnings; if you’re using v0.3, well, eventually you’ll be on v0.4!

Additionally, I modified the package to recognize the traction that Jupyter Notebook has gained in the community. Whereas the original version of Vega.jl only displayed output in a browser tab, I’ve overloaded the writemime method to display a :VegaVisualization inline in any environment that can render HTML. If you use Vega.jl from the REPL, you’ll still get the same default browser-opening behavior as before.
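For the curious, the hook itself is tiny. Below is a minimal sketch of the idea, not the actual Vega.jl source; to_html stands in for whatever hypothetical helper serializes the visualization spec into an HTML fragment:

# Minimal sketch only, assuming a hypothetical to_html helper; not the actual Vega.jl code
import Base: writemime

function writemime(io::IO, ::MIME"text/html", v::VegaVisualization)
    # Emit the chart as an HTML fragment; any frontend that requests the
    # text/html MIME type (Jupyter Notebook, for example) renders it inline
    write(io, to_html(v))
end

Because the method dispatches on the MIME type, environments that never request text/html (like the REPL) are unaffected.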

The First Visualization You Added Was A Pie Chart…

…And Followed With a Donut Chart?

Yup. I’m a troll like that. Besides, being loudly against pie charts is blowhardy (even if studies have shown that people are too stupid to evaluate them).

Adding these two charts (besides trolling) was a proof of concept that I understood the codebase well enough to extend the package. Now that the syntax works on Julia v0.4, I understand how the package fits together (important!), and the Jupyter Notebook workflow is in place, I plan to create all of the visualizations featured in the Trifacta Vega Editor, along with other standard visualizations such as boxplots. If the community has requests for the order of implementation, I’ll try to accommodate them; just add a feature request on the Vega.jl GitHub issues page.

Why Not Gadfly? You’re Not Starting A Language War, Are You?

No, I’m not that big of a troll. Besides, I don’t think we’ve squeezed all the juice (blood?!) out of the R vs. Python infographic yet; we don’t need another pointless debate.

My sole reason for not improving Gadfly is that I plain don’t understand how the codebase works! There are many amazing computer scientists and developers in the Julia community, and I’m not really one of them. I do, however, understand how to generate JSON strings, and in that sense Vega is the perfect platform for me to contribute to.

Collaborators Wanted!

If you’re interested in visualization, as well as learning Julia and/or contributing to a package, Vega.jl might be a good place to start. I’m always up for collaborating with people, and creating new visualizations isn’t that difficult (especially with the Trifacta examples). So hopefully some of you will be interested enough to join me in adding one more great visualization library to the Julia community.


Sessionizing Log Data Using data.table [Follow-up #2]

Thanks to user dnlbrky, we now have a third way to sessionize log data for any arbitrary timeout period (see methods 1 and 2), this time using data.table from R along with magrittr for piping:

library(magrittr)
library(data.table)

## Download, unzip, and load data (first 10,000 lines):
single_col_timestamp <- url("http://randyzwitch.com/wp-content/uploads/2015/01/single_col_timestamp.csv.gz") %>%
  gzcon %>%
  readLines(n=10000L) %>%
  textConnection %>%
  read.csv %>%
  setDT

## Convert to timestamp:
single_col_timestamp[, event_timestamp:=as.POSIXct(event_timestamp)]

## Order by uid and event_timestamp:
setkey(single_col_timestamp, uid, event_timestamp)

## Sessionize the data (more than 30 minutes between events is a new session):
single_col_timestamp[, session_id:=paste(uid, cumsum((c(0, diff(event_timestamp))/60 > 30)*1), sep="_"), by=uid]

## Examine the results:
#single_col_timestamp[uid %like% "a55bb9"]
single_col_timestamp[session_id %like% "fc895c3babd"]

I agree with dnlbrky that this feels a little better than the dplyr method for heavy SQL users like me, but ultimately, I still think the SQL method is the most elegant and the easiest to understand. But that’s the great thing about open-source software: pick whichever tool you want and accomplish the task however you choose.


Sessionizing Log Data Using dplyr [Follow-up]

Last week, I wrote a blog post showing how to sessionize log data using standard SQL. The main idea of that post is that if your analytics platform supports window functions (as Postgres and Hive do), you can make quick work of sessionizing logs. Here’s the winning query:

select
uid,
sum(new_event_boundary) OVER (PARTITION BY uid ORDER BY event_timestamp) as session_id,
event_timestamp,
minutes_since_last_interval,
new_event_boundary
from
			--Query 1: Define boundary events
			(select
			uid,
			event_timestamp,
			(extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp))/60 as minutes_since_last_interval,
			case when extract(epoch from event_timestamp) - lag(extract(epoch from event_timestamp)) OVER (PARTITION BY uid ORDER BY event_timestamp) > 30 * 60 then 1 ELSE 0 END as new_event_boundary
			from single_col_timestamp
			) a;

One nested sub-query and two window functions are all it takes to calculate the event boundaries and create a unique session identifier for any arbitrary timeout.

It’s Hadley’s House, We’re Just Leasing

Up until today, I hadn’t really done anything with dplyr. But having a bunch of free time this week and hearing so much about how great dplyr is, I decided to see what it would take to replicate this same exercise in R. dplyr supports Postgres as a back-end and has verbs that translate R code into window functions, so I figured it had to be possible. Here’s what I came up with:

###Sessionization using dplyr

library(dplyr)

#Open a localhost connection to Postgres
#Use table 'single_col_timestamp'
#group by uid and sort by timestamp for window function
#Do minutes calculation, working around missing support for extract(epoch from timestamp)
#Calculate event boundary and unique id via cumulative sum window function
sessions <-  
        src_postgres("logfiles") %>%
        tbl("single_col_timestamp") %>%
        group_by(uid) %>%
        arrange(event_timestamp) %>%
        mutate(minutes_since_last_event = (
                                           DATE_PART('day', event_timestamp - lag(event_timestamp)) * 86400 +
                                           DATE_PART('hour', event_timestamp - lag(event_timestamp)) * 3600 +
                                           DATE_PART('minute', event_timestamp - lag(event_timestamp)) * 60 +
                                           DATE_PART('second', event_timestamp - lag(event_timestamp))
                                           ) / 60
              ) %>%
        mutate(event_boundary = if(minutes_since_last_event > 30) 1 else 0,
               session_id = order_by(event_timestamp, cumsum(if(minutes_since_last_event > 30) 1 else 0)))

#Show query syntax
show_query(sessions)

#Actually run the query
answer <- collect(sessions)

Generally, I’m not a fan of the pipe operator, but I figured I’d give it a shot since everyone else seems to like it. This is one nasty bit of R code, but ultimately, it is possible to get the same result as writing the SQL directly. I did need to take a few detours, specifically in calculating the minutes between timestamps (converting the interval to seconds, then dividing by 60) and in substituting the CASE expression into the window function rather than calling it by name, but it’s basically the same logic.

Why Does This Work?

If you compare the SQL code above to the R code, you might be wondering why the dplyr code works at all. Certainly, working the dplyr way gives me cognitive dissonance, as you generally specify the verbs in the reverse order from how you would in SQL. But by calling show_query(sessions), you can see that dplyr is generating SQL under the hood (I’ve formatted the code for easier viewing):

SELECT
	"uid",
	"event_timestamp",
	"minutes_since_last_event",
	"event_boundary",
	"session_id"
FROM (
		SELECT
			"uid",
			"event_timestamp",
			"minutes_since_last_event",
			CASE WHEN "minutes_since_last_event" > 30.0 THEN 1.0 ELSE 0.0 END AS "event_boundary",
			sum(CASE WHEN "minutes_since_last_event" > 30.0 THEN 1.0 ELSE 0.0 END) OVER (PARTITION BY "uid" ORDER BY "event_timestamp" ROWS UNBOUNDED PRECEDING) AS "session_id"
		FROM
			(
				SELECT
					"uid",
					"event_timestamp",
					(DATE_PART('day', "event_timestamp" - LAG("event_timestamp", 1, NULL) OVER (PARTITION BY "uid" ORDER BY "event_timestamp")) * 86400.0
						+ DATE_PART('hour', "event_timestamp" - LAG("event_timestamp", 1, NULL) OVER (PARTITION BY "uid" ORDER BY "event_timestamp")) * 3600.0
						+ DATE_PART('minute', "event_timestamp" - LAG("event_timestamp", 1, NULL) OVER (PARTITION BY "uid" ORDER BY "event_timestamp")) * 60.0
						+ DATE_PART('second', "event_timestamp" - LAG("event_timestamp", 1, NULL) OVER (PARTITION BY "uid" ORDER BY "event_timestamp"))) / 60.0 AS "minutes_since_last_event"
				FROM "single_col_timestamp"
				ORDER BY "uid", "event_timestamp"
			) AS "_W1"
	) AS "_W2"

Like the output of all SQL-generating tools, the code is a bit inelegant; however, I have to say that I’m truly impressed dplyr was able to handle this scenario at all, given that this example has to be at least an edge case, if not a corner case, of what dplyr is meant for in terms of data manipulation.

So, dplyr Is Going To Become Part Of Your Toolbox?

While it was possible to re-create the same functionality, ultimately, I don’t see myself using dplyr a whole lot. When working against a database, it seems more efficient and portable just to write the SQL directly; at the very least, it’s what I’m already comfortable doing as part of my analytics workflow. For manipulating data frames, maybe I’d use it (I do use plyr extensively in my RSiteCatalyst package), but I’d probably be more inclined to use sqldf instead.

But that’s just me, not a reflection on the package quality. Happy manipulating, however you choose to do it! 🙂

