Version 1.4.10 of RSiteCatalyst brings a handful of new Get* methods, QueueDataWarehouse, and a couple of bug fixes/low-level code improvements.
The most useful user-facing change IMO is the addition of the QueueDataWarehouse method, which allows for submitting (and sometimes retrieving) Data Warehouse requests via R. This should be a huge timesaver for those of you using Data Warehouse as a substitute for the Adobe Analytics raw data feed (my employer alone pulls hundreds of Data Warehouse feeds per day).
In the coming days, I’ll write a blog post in more detail about how to use this method effectively to query Data Warehouse, but in the meantime, here’s a sample function call:
```r
# API credentials
SCAuth(Sys.getenv("USER", ""), Sys.getenv("SECRET", ""))

# FTP credentials
FTP <- Sys.getenv("FTP", "")
FTPUSER <- Sys.getenv("FTPUSER", "")
FTPPW <- Sys.getenv("FTPPW", "")

# Write QueueDataWarehouse result to FTP
report.id <- QueueDataWarehouse("report-suite",
                                "2016-11-01",
                                "2016-11-07",
                                c("visits", "pageviews"),
                                c("page"),
                                enqueueOnly = TRUE,
                                ftp = list(host = FTP,
                                           port = "21",
                                           directory = "/DWtest/",
                                           username = FTPUSER,
                                           password = FTPPW,
                                           filename = "myreport.csv"))
```
ViewProcessingRules has also been added on an experimental basis, as this API method is not documented (as of the writing of this post), so Adobe may choose to modify or remove this in a future release. As the method name indicates, this new feature allows for the viewing of the processing rules for a list of report suites, including the behaviors that define the rules. There is currently no (public) method for setting processing rules via the API.
As processing rules are a super-user functionality, I would love it if someone in the community could verify that this method works for a large number of report suites/rules.
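If you're in a position to help verify, a call along these lines should be all that's needed; the report suite ids below are placeholders, and this assumes you've already authenticated via SCAuth():

```r
# Hypothetical report suite ids; substitute your own
# Assumes prior authentication via SCAuth()
rules <- ViewProcessingRules(c("report-suite-1", "report-suite-2"))
```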
Get* method additions
Three Get* methods were added in this release: GetVirtualReportSuiteSettings, GetReportSuiteGroups, and GetTimeStampEnabled. Each corresponds to settings that can be viewed within the Adobe Analytics admin panel.
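Assuming these follow the usual Get* calling convention in the package (a report suite id, after authenticating via SCAuth()), usage might look like this sketch; the report suite id is a placeholder:

```r
# Illustrative report suite id; assumes prior authentication via SCAuth()
vrs_settings <- GetVirtualReportSuiteSettings("report-suite")
suite_groups <- GetReportSuiteGroups("report-suite")
ts_enabled   <- GetTimeStampEnabled("report-suite")
```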
GetRealTimeSettings: fixed a bug where passing a list of report suites led to a parsing error
QueueSummary: redefined the method arguments to make date an optional parameter, allowing for more elegant use of the date.to and date.from parameters
At long last, I’ve added integration testing via the AppVeyor testing service. For the longest time, I’ve had a very nonchalant attitude towards Windows (since I don’t use it), but given that RSiteCatalyst is enterprise software and so many businesses use Windows, I figured it was time.
Luckily, none of the tests in the test suite threw any errors specifically due to Windows, so effectively this change is just defensive programming against that class of error in the future.
As in the past several releases, there have been contributions from the community keeping RSiteCatalyst moving forward! Special thanks to Diego Villuendas Pellicero for writing the QueueDataWarehouse functionality and Johann de Boer for highlighting that QueueSummary could be improved.
I encourage all users of the software to continue reporting bugs via GitHub issues, especially if you can provide a working code example. Even better, a fix via pull request will ensure that your bug is addressed in a timely manner and for the benefit of others in the community.
About a month ago, I switched this blog from WordPress hosted on Bluehost to Jekyll on GitHub Pages. I suspected moving to a static website would be faster than HTML generated via PHP, and it is certainly cheaper (GitHub Pages is “free”). But it wasn’t until I needed a dataset for doing some data visualization development that I realized how much of an improvement it has been!
Packages, Packages, Packages
With the release of v0.5 of Julia, I’ve been working (less) on updating my packages and making new packages (more), because making new stuff is more fun than maintaining old stuff! One of the packages I’ve been building is for the ECharts visualization library (v3) from Baidu. While Julia doesn’t necessarily need another visualization library, visualization is something I’m interested in and learning is easier when you’re solving problems you like. And since the world doesn’t need another Iris example, I decided to share some real world website performance data :)
```julia
using ECharts, DataFrames

# Read in data
df = readtable("/assets/data/website_time_data.csv")

# Make data two different series that overlap, so endpoint touches
df[:pre] = [(x[1] <= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]
df[:post] = [(x[1] >= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]

# Graph code
l = line(df[:date], hcat(df[:pre], df[:post]))
l.ec_width = 800
seriesnames!(l, ["loadtime_ms", "post"])
colorscheme!(l, palette = ("acw", "FlatUI"))
yAxis!(l, name = "Load time in ms")
title!(l, text = "randyzwitch.com",
       subtext = "Switching from WordPress on Bluehost to Jekyll on GitHub (2016/09/06)")
toolbox!(l, chartTypes = ["bar", "line"])
slider!(l)
```
Even though I switched from WordPress to Jekyll on 9/6/2016, it appears that the page cache for Google Webmaster Tools didn’t really expire until 9/12/2016 or so. In the average case, the load time went from 1128ms to 38ms! Of course, this isn’t really a fair comparison, as presumably GitHub Pages runs on much better hardware than the cheap Bluehost hosting I had, and I didn’t reimplement most of the garbage I had on the WordPress version of the blog. But from a user-experience standpoint, good lord, what an improvement!
Wanting to test out further functionality, here are some box plots of the load-time variation:
```julia
using ECharts, DataFrames

# Read in data
df = readtable("/Users/randyzwitch/Desktop/website_load_time.csv")
df[:pre] = [(x[1] <= "2016-09-06" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]
df[:post] = [(x[1] >= "2016-09-12" ? x[2] : nothing) for x in zip(df[:date], df[:loadtime_ms])]

# Remove nulls
pre = [x for x in df[:pre] if x != nothing]
post = [x for x in df[:post] if x != nothing]

# Graph code
b = box([pre, post], names = ["WordPress", "Jekyll"])
b.ec_width = 800
colorscheme!(b, palette = ("acw", "VitaminC"))
yAxis!(b, name = "Load time in ms", nameGap = 50, min = 0)
title!(b, text = "randyzwitch.com",
       subtext = "Switching from WordPress on Bluehost to Jekyll on GitHub (2016/09/06)")
toolbox!(b)
```
Usually, a box plot comparison as smushed as the Jekyll plot vs. the WordPress one would be a poor visualization, but in this case I think it actually works. The load time for the Jekyll version of this blog is so quick and so consistent that it would barely register as an outlier if it were a WordPress load time! It’s crazy to think that the −1.5 × IQR time for WordPress is the mean/median/min load time of Jekyll.
Where To Go Next?
This blog post is really just an interesting finding from my experience moving to Jekyll on GitHub. As it stands now, ECharts.jl is still in pre-METADATA mode. Right now, I assume that this would be a useful enough package to submit to METADATA some day, but I guess that depends on how much further I get in smoothing out the rough edges. If there are people who are interested in cleaning up this package further, I’d absolutely love to collaborate.
This blog post also serves as release notes for RSiteCatalyst v1.4.9, as only one feature was added (batch report request and download). But it’s a feature big enough for its own post!
Recently, I was asked how I would approach replicating the market basket analysis blog post I wrote for 33 Sticks, but using a lot more data. Like, months and months of order-level data. While you might be able to submit multiple months’ worth of data in a single RSiteCatalyst call, it’s a lot more elegant to request data from the Adobe Analytics API in several calls. With the new batch-submit and batch-receive functionality in RSiteCatalyst, this process can be a LOT faster.
Prior to version 1.4.9 of RSiteCatalyst, API calls could only be made in a serial fashion:
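A sketch of what that serial pattern looked like; the report suite, dates, metrics, and elements are illustrative, and QueueRanked stands in for any of the Queue* functions:

```r
# Serial pattern: each QueueRanked() call blocks until the
# Adobe Analytics API finishes calculating that report
reports <- list()
for (month in c("2016-01", "2016-02", "2016-03")) {
  date.from <- paste0(month, "-01")
  date.to   <- paste0(month, "-28")  # illustrative; use real month-end dates
  reports[[month]] <- QueueRanked("report-suite", date.from, date.to,
                                  c("orders"), c("product"))
}
```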
The underlying assumption from a package development standpoint was that the user would be working in an interactive fashion: submit a report request, then wait to get the answer back. There was nothing inherently wrong with this code from an R standpoint that made it a slow process; you just had to wait until one report was calculated by the Adobe Analytics API before the next one could be submitted.
Of course, most APIs can process multiple calls simultaneously, and the Adobe Analytics API is no exception. Thanks to user shashispace, it’s now possible to submit all of your report calls at once, then retrieve the results:
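Here’s a sketch of that batch pattern, under the same illustrative assumptions as before (report suite, dates, metrics, and elements are placeholders), with GetReport retrieving each previously enqueued report by its id:

```r
library(dplyr)

# Step 1: submit all requests at once, collecting report ids
# instead of waiting for results
ids <- list()
for (month in c("2016-01", "2016-02", "2016-03")) {
  date.from <- paste0(month, "-01")
  date.to   <- paste0(month, "-28")  # illustrative; use real month-end dates
  ids[[month]] <- QueueRanked("report-suite", date.from, date.to,
                              c("orders"), c("product"),
                              enqueueOnly = TRUE)
}

# Step 2: retrieve each report, polling every second instead of every 60
results <- lapply(ids, function(id) GetReport(id, interval.seconds = 1))

# Step 3: bind the individual data frames together
combined <- bind_rows(results)
```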
This code is nearly identical to the serial snippet above, except for 1) the addition of the enqueueOnly = TRUE keyword argument and 2) lowering the interval.seconds keyword argument to 1 second instead of 60. When you use the enqueueOnly keyword, a Queue* function returns the report.id instead of the report results; by accumulating these report.id values in a list, we can then retrieve the reports and bind them together using dplyr.
Performance gain: 4x speed-up
Although the code snippets are nearly identical, it is far faster to submit the reports all at once and then retrieve the results. By submitting the requests all at once, the API can process numerous calls simultaneously, and while you are retrieving the results of one call, the others continue to process in the background.
I wouldn’t have thought this would make such a difference, but retrieving one month of daily order-level data went from taking 2420 seconds to 560 seconds, a savings of 1860 seconds (about 31 minutes)! If you were to retrieve the same amount of daily data for an entire year, that works out to roughly 6 hours saved in processing time (12 × 1860 s ≈ 6.2 hours).
Keep The Pull Requests Coming!
The last several RSiteCatalyst releases have been driven by contributions from the community and I couldn’t be happier! Given that I don’t spend much time in my professional life now using Adobe Analytics, having improvements driven by a community of users using the library daily is just so rewarding.
So please, if you have a suggestion for improvement (and especially if you find a bug), submit an issue on GitHub. Submitting questions and issues to GitHub is the easiest way for me to provide support, while also giving other users the chance to answer your question before I can. It also provides a means for others to determine whether they are experiencing a new or previously known problem.