Maybe I don’t have enough to do today, or a long day of vendor calls has made me re-evaluate what I’m doing with my life, but I had a thought:
I feel like in life, you go through many phases, and people come in and out of your life. But Twitter, you generally just accumulate. — Randy Zwitch (@randyzwitch) November 7, 2014
I started on Twitter in December 2009. Quite ironically from where I sit now, I think I joined Twitter because I was an Omniture/Adobe Analytics newcomer, and probably searched Google for some term I didn’t understand. I eventually realized that people were talking about digital analytics on Twitter, so I created an account. Now, of course, I would imagine many would consider me an Adobe Analytics expert, at least in the case of the API and data feeds. And I now use Twitter way differently than I used to.
Since 2009, I’ve gone from banking and being a beginner at digital analytics, to working at an agency helping clients with Omniture, to being a digital analytics consultant at a specialty firm, to a start-up that didn’t work out, to a gigantic media company. I also pretty much never think about ‘web’ analytics anymore, except for the fact that I created and maintain a very specialized R package that maybe a few hundred people in the world use (if I’m lucky).
@chrisolenik Would I follow the same people again if I started fresh? How did I accumulate the list that I did?
When we graduate from high school, then college, then get married, the natural progression is that people come into your life and some fall out. But Twitter has a sort of hoarder quality to it. Some people cull the list of people they follow, because they don’t like what a person tweets about or they get into stupid Twitter feuds, but for the most part the list just builds and builds. Others stop using Twitter and you never hear from them again, yet you still follow them (their silence?). But it occurs to me, this seems at least tangentially like the Abilene Paradox: at some point, you arrive at a place and you don’t know how you got there. What have I done over the past five years that has led me to this place where I’m reading about what I do on Twitter?
So I’m going to conduct an experiment. I’m unfollowing all of you without prejudice. Just as if I had a hard drive crash. And I’m going to re-follow all the people I can remember, then re-discover what I’m really interested in getting from Twitter as a platform by re-following friends of friends, people saying interesting things on hashtags, etc. Hope none of you are hurt by my unfollows, but then again, if you are then maybe that says something about what kind of relationship we currently have.
A couple of weeks ago, Twitter open-sourced their BreakoutDetection package for R, a package designed to detect mean shifts (‘breakouts’) in time-series data. The Twitter announcement does a great job of explaining the main technique for detection (E-Divisive with Medians), so I won’t rehash that material here. Rather, I wanted to see how this package works relative to the anomaly detection feature in the Adobe Analytics API, which I’ve written about previously.
Getting Time-Series Data Using RSiteCatalyst
To evaluate this package on a real-world dataset, I’m going to use roughly ten months of daily pageviews from my blog. The hypothesis is that if the BreakoutDetection package works well, it should be able to detect the boundaries around when I publish a blog post (dates I know with certainty) and when my articles get shared on sites such as Reddit. From past experience, I get about a 3-day lift in pageviews after publishing, as the article gets tweeted out, published on R-Bloggers or JuliaBloggers, and shared accordingly.
Here’s the code to get daily pageviews using RSiteCatalyst (Adobe Analytics):
# Installing BreakoutDetection package
install.packages("devtools")
devtools::install_github("twitter/BreakoutDetection")

library(BreakoutDetection)
library("RSiteCatalyst")

SCAuth("company", "secret")

# Get pageviews for each day in 2014
pageviews_2014 <- QueueOvertime('report-suite',
                                date.from = '2014-02-24',
                                date.to = '2014-11-05',
                                metric = 'pageviews',
                                date.granularity = 'day')

# v1.0.1 of package requires specific column names and dataframe format
formatted_df <- pageviews_2014[, c("datetime", "pageviews")]
names(formatted_df) <- c("timestamp", "count")
One thing to notice here is that BreakoutDetection requires either a single R vector or a specifically formatted data frame (with ‘timestamp’ and ‘count’ columns). In this case, because I have a timestamp, the last two lines of the code above get the data into the required format.
BreakoutDetection - Default Example
In the Twitter announcement, they provide an example, so let’s evaluate those defaults first:
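Here’s a minimal sketch of that call applied to my pageviews data frame; the parameter values mirror the example in the Twitter announcement, and the result object name is mine:

# Run breakout detection with the announcement's example parameters:
# a minimum of 24 observations between breakouts, using the
# multiple-breakout ('multi') method
res_default <- breakout(formatted_df, min.size = 24, method = 'multi',
                        beta = 0.001, degree = 1, plot = TRUE)
res_default$plot  # plot the series with detected breakout boundaries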
In order to validate my hypothesis, the package would need to detect roughly 12 ‘breakouts’, as I published 12 blog posts during the sample time period. Mentally drawing lines between the red boundaries, we can see three definitive upward mean shifts, far fewer than the 12 I expected.
BreakoutDetection - Modifying The Parameters
Given that the chart above doesn’t fit how I think my data are generated, we can modify two main parameters: beta and min.size. From the documentation:
beta: A real numbered constant used to further control the amount of penalization. This is the default form of penalization, if neither (or both) beta or (and) percent are supplied this argument will be used. The default value is beta=0.008.
min.size: The minimum number of observations between change points
The first parameter I’m going to experiment with is min.size, because it requires no in-depth knowledge of the EDM technique! The value used in the first example was 24 (days) between intervals, which seems extreme in my case. It’s reasonable that I might publish a blog post per week, so let’s back that number down to 5 and see how the result changes:
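Here’s the same call as a sketch, changing only min.size (again, the object name is mine):

# Identical call, but allowing breakouts as few as 5 observations apart
res_small <- breakout(formatted_df, min.size = 5, method = 'multi',
                      beta = 0.001, degree = 1, plot = TRUE)
res_small$plot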
With 17 predicted intervals, we’ve somewhat overshot the mark relative to my 12 blog posts. Not that the package is wrong per se; the boundaries do surround many of the spikes in the data, but perhaps having this many breakpoints isn’t useful from a monitoring standpoint. So setting the min.size parameter somewhere between 5 and 24 points would give us more than 3 breakouts, but fewer than 17. There is also the beta parameter that can be played with, but I’ll leave that as an exercise for another day.
Comparison to Adobe Analytics Anomaly Detection
Even though the ideas behind the two techniques are similar, it’s clear that they don’t quite measure the same thing. Adobe Analytics anomaly detection evaluates the data point-by-point, using a smoothing model built from the prior 35 points. If a point exceeds the upper or lower control limit, it’s flagged as an anomaly, but that isn’t necessarily indicative of a true level shift of the kind the BreakoutDetection package is measuring.
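For reference, here’s a sketch of pulling the Adobe Analytics anomaly detection output with RSiteCatalyst; the anomaly.detection argument and the names of the returned forecast/bound columns are from my recollection of the package documentation, so check ?QueueOvertime against your installed version:

# Same daily pageviews pull, but requesting the API's anomaly detection
# output (forecast plus upper/lower control limits) alongside the metric
pageviews_anomaly <- QueueOvertime('report-suite',
                                   date.from = '2014-02-24',
                                   date.to = '2014-11-05',
                                   metric = 'pageviews',
                                   date.granularity = 'day',
                                   anomaly.detection = TRUE)
str(pageviews_anomaly)  # inspect the forecast and bound columns returned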
Conclusion
The BreakoutDetection package is definitely cool, but it’s a bit raw, especially the default graphics. That said, it clearly does work, as evidenced by how well it placed boundaries around the traffic spikes once I set the min.size parameter to five.
Additionally, I tried to read more about the underlying methodology, but the only references that come up in Google seem to be to the R package itself! I wish I had a better feel for how the beta parameter influences the results, but I suppose that will come with more use of the package. In any case, I’m glad Twitter open-sourced this package, as I’ve often wondered how to detect level shifts in a more operational setting, and now I have a method to do so.
In my prior post on visualizing website structure using network graphs, I noted that network graphs show the pairwise relationships between two pages (in a bi-directional manner). However, if you want to analyze how your visitors path through your site, you can visualize the same data using a Sankey chart.
Visualizing Single Page-to-Next Page Pathing
Most digital analytics tools allow you to visualize the path between pages. In the case of Adobe Analytics, the Next Page Flow diagram is limited to 10 second-level branches in the visualization. However, the Adobe Analytics API has no such limitation, and as such we can use RSiteCatalyst to create the following visualization (GitHub Gist containing R code):
The data processing for this visualization is nearly identical to that for the network diagrams. We can use QueuePathing() from RSiteCatalyst to download the pathing data, except in this case I specified an exact page name as the first level of the pathing pattern instead of using the ::anything:: operator, as sketched below. In all Sankey charts created by d3Network, you can hover over the right-hand nodes to see their values (you can also drag the nodes on either side if you desire!). It’s pretty clear from this diagram that I need to do a better job retaining my visitors, as the most common path from this page is to leave. 🙁
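The QueuePathing() call for this chart looks roughly like the following sketch; the page URL is a placeholder for the actual Hadoop article, and the rest of the processing follows the many-to-many example in the next section:

# First element of the path pattern is an exact page name (placeholder
# URL here); "::anything::" matches whatever page the visitor viewed next
pathpattern <- c("http://randyzwitch.com/example-hadoop-article/", "::anything::")
next_page <- QueuePathing("zwitchdev", "2014-01-01", "2014-08-31",
                          metric = "pageviews", element = "page",
                          pathpattern)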
Many-to-Many Page Pathing
The example above picks a single page related to Hadoop, then shows how my visitors continue through my site; sometimes they go to other Hadoop pages, sometimes to Data Science related content, or down any number of other paths. If we want, however, we can visualize how all visitors path through all pages. Like the force-directed graph, we can get this information by using the ("::anything::", "::anything::") path pattern with QueuePathing():
# Multi-page pathing
library("d3Network")
library("RSiteCatalyst")

#### Authentication
SCAuth("name", "secret")

#### Get All Possible Paths with ("::anything::", "::anything::")
pathpattern <- c("::anything::", "::anything::")
next_page <- QueuePathing("zwitchdev", "2014-01-01", "2014-08-31",
                          metric = "pageviews", element = "page",
                          pathpattern, top = 50000)

# Optional step: Cleaning my pagename URLs to remove the domain for clarity
next_page$step.1 <- sub("http://randyzwitch.com/", "", next_page$step.1, ignore.case = TRUE)
next_page$step.2 <- sub("http://randyzwitch.com/", "", next_page$step.2, ignore.case = TRUE)

# Filter out Entered Site and duplicate rows, >= 120 for chart legibility
links <- subset(next_page, count >= 120 & step.1 != "Entered Site")

# Get unique values of page name to create nodes df
# Create an index value, starting at 0
nodes <- as.data.frame(unique(c(links$step.1, links$step.2)))
names(nodes) <- "name"
nodes$nodevalue <- as.numeric(row.names(nodes)) - 1

# Convert string to numeric nodeid
links <- merge(links, nodes, by.x = "step.1", by.y = "name")
names(links) <- c("step.1", "step.2", "value", "segment.id", "segment.name", "source")

links <- merge(links, nodes, by.x = "step.2", by.y = "name")
names(links) <- c("step.2", "step.1", "value", "segment.id", "segment.name", "source", "target")

# Create next page Sankey chart
d3output <- "~/Desktop/sankey_all.html"
d3Sankey(Links = links, Nodes = nodes, Source = "source", Target = "target",
         Value = "value", NodeID = "name", fontsize = 12, nodeWidth = 50,
         file = d3output, width = 750, height = 700)
Running the code above provides the following visualization:
For legibility purposes, I’m only plotting paths that occur at least 120 times. But given a large enough display, it would be possible to visualize all valid combinations of paths.
One thing to keep in mind is that the d3.js library has a weird hiccup: if your dataset contains “duplicate” paths such that both Source -> Target and Target -> Source exist, d3.js will go into an infinite loop and fail to show any visualization. My R code above doesn’t guard against this, but it should be trivial to remove these “duplicates” should they arise in your dataset.
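Should that happen, one possible approach (a sketch, assuming the links data frame built above) is to collapse each source/target pair into an order-independent key, then keep only the first direction seen for each pair:

# Build a key that is identical for A -> B and B -> A by sorting the
# two node ids before pasting them together...
undirected_key <- apply(links[, c("source", "target")], 1,
                        function(x) paste(sort(x), collapse = "-"))
# ...then drop whichever direction of each pair appears second
links <- links[!duplicated(undirected_key), ]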
Unlike the network graphs, Sankey Charts are fairly easy to understand. The “worst” path on my site in terms of keeping visitors on site is where I praised Apple for fixing my MacBook Pro screen out-of-warranty. The easy explanation for this poor performance is that this article attracts people who aren’t really my target audience in data science, but looking for information about getting THEIR screens fixed. If I wanted to engage these readers more, I guess I would need to write more Apple-related content.
To the extent there are multi-stage paths, these tend to be Hadoop and Julia-related content. This makes sense as both technologies are fairly new, I have a lot more content in these areas, and especially in the case of Julia, I’m one of the few people writing practical content. So I’m glad to see I’m achieving some level of success in these areas.