It’s been about six months since the last RSiteCatalyst update, and this update is really just a single bug fix, but a big bug fix at that!
Sparse Data = Opaque Error Messages
Numerous people have reported receiving an error message from RSiteCatalyst similar to the following:
‘names’ attribute must be the same length as the vector
This is about the least helpful message that could’ve been returned, but it is a standard R error: an internal function was trying to overwrite a data frame’s column-names vector (which had non-zero length) with a vector of length zero, which is invalid in the context of a data frame. Thankfully, Willem Paling was able to squash this bug (hopefully) once and for all; the error occurs when a user runs a Queue* report with multiple breakdowns and the Adobe API returns NULL data for one of the breakdowns.
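For the curious, the underlying R behavior can be reproduced in isolation. This is a minimal sketch of the R-level error, not RSiteCatalyst’s actual internals:

```r
# Minimal sketch (not RSiteCatalyst's internals): setting a names
# attribute whose length doesn't match the vector's length fails.
x <- 1:3

msg <- tryCatch({
  attr(x, "names") <- character(0)  # zero-length names on a length-3 vector
  "no error"
}, error = function(e) conditionMessage(e))

print(msg)
# R raises an error along the lines of:
# "'names' attribute [0] must be the same length as the vector [3]"
```

Inside a package, an assignment like this deep in a parsing function produces exactly the kind of opaque message users were reporting.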
So if you’ve run into this error before (which I have to imagine was quite frustrating), you shouldn’t see it again with v1.4.4 of RSiteCatalyst. Additionally, tests will be added to the test suite that attempt to trigger this error, so that this horrible monster of a bug doesn’t reappear.
The only other change of substance was to modify the message returned after calling SCAuth(); some users were having issues with API calls not working even after RSiteCatalyst had returned 'Authentication Succeeded' to the console. RSiteCatalyst never actually validates that your credentials are correct, only that they are stored within the session. The console message has been updated to reflect this.
Proper Punctuation Prevents Poor Documentation!
The eagle-eyed among you might have noticed that my DESCRIPTION file was out of CRAN spec for many months. This has now been fixed, so that the meaning is as clear as possible.
As always, if you come across bugs or have feature requests, please continue to use the RSiteCatalyst GitHub Issues page to submit issues. Don’t worry about cluttering up the page with tickets; please file a new issue for anything you encounter (including the code you’ve tried and how it’s failing), unless you are SURE that it is the same problem someone else is facing.
Contributors are also very welcome. If there is a feature you’d like added, and especially if you can fix an outstanding issue reported on GitHub, we’d love to have your contributions. Willem and I are both parents of young children and have real jobs outside of open-source software, so any meaningful contribution to RSiteCatalyst is greatly appreciated.
Recently, I’ve found myself without a project to hack on, and I’ve always been interested in learning more about browser-based visualization. So I decided to revive the work that John Myles White had done in building Vega.jl nearly two years ago. And since I’ll be giving an analytics & visualization workshop at JuliaCon 2015, I figured I’d better study the topic in a bit more depth.
Back In Working Order!
The first thing I tackled here was to upgrade the syntax to target v0.4 of Julia. This is just my developer preference, to avoid using Compat.jl when there are so many more visualizations I’d like to support. So if you’re using v0.4, you shouldn’t see any deprecation errors; if you’re using v0.3, well, eventually you’ll use v0.4!
Additionally, I modified the package to recognize the traction that Jupyter Notebook has gained in the community. Whereas the original version of Vega.jl only displayed output in a tab in a browser, I’ve overloaded the writemime method to display :VegaVisualization inline for any environment that can display HTML. If you use Vega.jl from the REPL, you’ll still get the same default browser-opening behavior as existed before.
The First Visualization You Added Was A Pie Chart…
…And Followed With a Donut Chart?
Yup. I’m a troll like that. Besides, being loudly against pie charts is blowhardy (even if studies have shown that people are too stupid to evaluate them).
Adding these two charts (besides trolling) was a proof of concept that I understood the codebase well enough to extend the package. Now that the syntax works on Julia v0.4, I understand how the package works (important!), and the Jupyter Notebook workflow is in place, I plan to create all of the visualizations featured in the Trifacta Vega Editor, as well as other standard visualizations such as boxplots. If the community has requests for the order of implementation, I’ll try to accommodate them; just add a feature request on the Vega.jl GitHub issues page.
Why Not Gadfly? You’re Not Starting A Language War, Are You?
No, I’m not that big of a troll. Besides, I don’t think we’ve squeezed all the juice (blood?!) out of the R vs. Python infographic yet; we don’t need another pointless debate.
My sole reason for not improving Gadfly is just that I plain don’t understand how the codebase works! There are many amazing computer scientists & developers in the Julia community, and I’m not really one of them. I do, however, understand how to generate JSON strings and in that sense, Vega is the perfect platform for me to contribute.
If you’re interested in visualization, as well as learning Julia and/or contributing to a package, Vega.jl might be a good place to start. I’m always up for collaborating with people, and creating new visualizations isn’t that difficult (especially with the Trifacta examples). So hopefully some of you will be interested enough to join me in adding one more great visualization library to the Julia community.
Thanks to user dnlbrky, we now have a third way to accomplish sessionizing log data for any arbitrary timeout period (see methods 1 and 2), this time using data.table from R along with magrittr for piping:
library(magrittr)
library(data.table)

## Download, unzip, and load data (first 10,000 lines):
single_col_timestamp <- url("http://randyzwitch.com/wp-content/uploads/2015/01/single_col_timestamp.csv.gz") %>%
  gzcon %>%
  readLines(n = 10000L) %>%
  textConnection %>%
  read.csv %>%
  setDT

## Convert to timestamp:
single_col_timestamp[, event_timestamp := as.POSIXct(event_timestamp)]

## Order by uid and event_timestamp:
setkey(single_col_timestamp, uid, event_timestamp)

## Sessionize the data (more than 30 minutes between events is a new session):
single_col_timestamp[, session_id := paste(uid, cumsum((c(0, diff(event_timestamp)) / 60 > 30) * 1), sep = "_"), by = uid]

## Examine the results:
single_col_timestamp[uid %like% "a55bb9"]
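To make the cumsum()/diff() sessionization logic concrete, here is a toy example for a single user with made-up timestamps (illustrative data only, not from the file above):

```r
# Toy illustration of the sessionization trick: any gap over 30 minutes
# flips a 0/1 flag, and cumsum() turns those flags into a running
# session counter.
ts <- as.POSIXct(c("2015-01-01 09:00:00", "2015-01-01 09:10:00",
                   "2015-01-01 10:00:00", "2015-01-01 10:05:00"))

# Minutes between consecutive events; 0 for the first event.
# as.numeric() gives seconds, avoiding difftime unit ambiguity.
gaps <- c(0, diff(as.numeric(ts)) / 60)

# TRUE (1) whenever a gap exceeds 30 minutes; cumsum gives the session index
session <- cumsum(gaps > 30)
print(session)
# 0 0 1 1 -- the 50-minute gap before the third event starts a new session
```

Pasting uid onto that counter, as the data.table one-liner does, yields a session id that is unique per user and per session.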
I agree with dnlbrky that this feels a little better than the dplyr method for heavy SQL users like me, but ultimately, I still think the SQL method is the most elegant and easiest to understand. But that’s the great thing about open-source software: pick whatever tool you want and accomplish your task using whatever method you choose.