Building JSON in R: Three Methods

When I set out to build RSiteCatalyst, I had a few major goals: learn R, build a CRAN-worthy package and learn the Adobe Analytics API. As I reflect on how the package has evolved over the past two years and what I’ve learned, I think my greatest lesson was how to deal with JSON (and strings in general).

JSON is ubiquitous as a data-transfer mechanism over the web, and R does a decent job providing the functionality to not only read JSON but also to create JSON. There are at least three methods I know of to build JSON strings, and this post will cover the pros and cons of each method.

Method 1: Building JSON using paste

As a beginning R user, I wasn’t aware of how many great user-contributed packages are out there. So throughout the RSiteCatalyst source code you can see gems like:

#"metrics" would be a user input into a function arguments
metrics <- c("a", "b", "c")

#Loop over the metrics list, appending proper curly braces
metrics_conv <- lapply(metrics, function(x) paste('{"id":', '"', x, '"', '}', sep=""))

#Collapse the list into a proper comma separated string
metrics_final <- paste(metrics_conv, collapse=", ")

The code above loops over a character vector (using lapply instead of a for loop like a good R user!), appending curly braces, then flattening the list down to a string. While this code works, it’s quite a brittle way to build JSON. You end up needing to worry about matching quotation marks, remembering whether you need curly braces, brackets or singletons…overall, it’s a maintenance nightmare to build strings this way.
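For what it’s worth, printing the final object shows that this only produces a comma-separated fragment of JSON objects; the surrounding brackets/braces still have to be pasted on elsewhere in the code:

#Inspect the result of the paste() approach
cat(metrics_final)
#{"id":"a"}, {"id":"b"}, {"id":"c"}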

Of course, if you have a really simple JSON string to build, paste() doesn’t have to be off-limits, but for the majority of cases I’ve seen, it’s probably not a good idea.

Method 2: Building JSON using sprintf

Somewhere in the middle of building version 1 of RSiteCatalyst, I started learning Python. For those of you who aren’t familiar, Python has a string interpolation operator %, which allows you to do things like the following:

In [1]: print "Here's a string substitution for my name: %s" % ("Randy")
Here's a string substitution for my name: Randy

Thinking that this was the most useful thing I’d ever seen in programming, I naturally searched to see if R had the same functionality. Of course, I quickly learned that all C-based languages have printf/sprintf, and R is no exception. So I started building JSON using sprintf in the following manner:

elements_list = sprintf('{"id":"%s",
                          "top": "%s",
                          "startingWith":"%s",
                          "search":{"type":"%s", "keywords":[%s]}
                          }', element, top, startingWith, searchType, searchKW2)

In this example, we’re now passing R objects into the sprintf() function, with %s tokens everywhere we need to substitute text. This is certainly an improvement over paste(), especially given that Adobe provides example JSON via their API explorer. So I copied the example strings, replaced their examples with my tokens and voilà! Better JSON string building.
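As a concrete illustration, here’s the template above filled in with a hypothetical set of inputs (the values are made up for the example, and searchKW2 is assumed to already be a comma-separated string of quoted keywords):

#Hypothetical inputs, for illustration only
element <- "page"
top <- "10"
startingWith <- "0"
searchType <- "OR"
searchKW2 <- '"shoes", "boots"'

elements_list <- sprintf('{"id":"%s",
                          "top": "%s",
                          "startingWith":"%s",
                          "search":{"type":"%s", "keywords":[%s]}
                          }', element, top, startingWith, searchType, searchKW2)

#Print the interpolated JSON string
cat(elements_list)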

Method 3: Building JSON using a package (jsonlite, rjson or RJSONIO)

While sprintf() allowed for much easier JSON building, a frequent code smell remained in RSiteCatalyst, as evidenced by the following:

#Converts report_suites to JSON
if(length(report_suites)>1){
  report_suites <- toJSON(report_suites)
} else {
  report_suites <- toJSON(list(report_suites))
}

#API request
json <- postRequest("ReportSuite.GetTrafficVars", paste('{"rsid_list":', report_suites, '}'))

At some point, I realized that the toJSON() function from rjson would take care of formatting R objects as JSON strings, yet I didn’t make the leap to understanding that I could build the whole request using R objects translated by toJSON()! So I have more hard-to-maintain code, checking the class/length of objects and formatting them myself. The efficient way to do this using rjson would be:

#Efficient method
library(rjson)
report_suites <- list(rsid_list=c("A", "B", "C"))
request.body <- toJSON(report_suites)

#API request
json <- postRequest("ReportSuite.GetTrafficVars", request.body)

With the code above, we’re building JSON in a very R-looking manner: just R objects and functions, and in return we get the output we want. While it’s slightly less obvious what request.body contains, there’s literally zero bracket-matching, quote-balancing or anything else to worry about when building our JSON. That’s not to say there isn’t a learning curve to using a JSON package, but I’d rather figure out whether I need a character vector or a list than burn my eyes out looking for mismatched quotes and brackets!
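To see how this scales to the nested structure from Method 2, here’s a minimal sketch (not the actual RSiteCatalyst code) using jsonlite, with auto_unbox = TRUE so that length-one vectors become JSON scalars rather than one-element arrays; the field values are hypothetical:

#Sketch only: rebuilding the Method 2 payload from plain R objects
library(jsonlite)

element_request <- list(id = "page",
                        top = 10,
                        startingWith = 0,
                        search = list(type = "OR",
                                      keywords = c("shoes", "boots")))

#The nesting of the lists mirrors the nesting of the JSON
toJSON(element_request, auto_unbox = TRUE)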

Collaborating Makes You A Better Programmer

Like any pursuit, you can get pretty far on your own through hard work and self-study. However, I wouldn’t be nearly where I am without collaborating with others (especially in learning how to build JSON properly in R!). A majority of the RSiteCatalyst code for the upcoming version 1.4 was re-written by Willem Paling, who added consistency to keyword arguments, switched to jsonlite for better JSON parsing to data frames and, most importantly for the topic of this post, cleaned up the method of building all the required JSON strings!

Edit 5/13: For a more thorough example of building complex JSON using jsonlite, check out this example from the v1.4 branch of RSiteCatalyst. The linked example R code populates the required arguments from this JSON outline provided by Adobe.


Using SQL Workbench with Apache Hive

If you’ve spent any non-trivial amount of time working with Hadoop and Hive at the command line, you’ve likely wished that you could interact with Hadoop like you would any other database. If you’re lucky, your Hadoop administrator has already installed the Apache Hue front-end to your cluster, which allows for interacting with Hadoop via an easy-to-use browser interface. However, if you don’t have Hue, Hive also supports access via JDBC; the downside is, setup is not as easy as including a single JDBC driver.

While there are paid database administration tools such as Aqua Data Studio that support Hive, I’m an open source kind of guy, so this tutorial will show you how to use SQL Workbench to access Hive via JDBC. This tutorial assumes that you are proficient enough to get SQL Workbench installed on whatever computing platform you are using (Windows, OSX, or Linux).

Download Hadoop jars

The hardest part of using Hive via JDBC is getting all of the required jars. At work I am using a MapR distribution of Hadoop, and each Hadoop vendor platform provides drivers for their version of Hadoop. For MapR, all of the required Java .jar files are located at /opt/mapr/hive/hive-0.1X/lib (where X represents the Hive version number you are using).

[Image: listing of the MapR Hive lib directory. Download all the .jar files in one shot, just in case you need them in the future]

Since it’s not always clear which .jar files are required (especially for other projects/setups you might be doing), I just downloaded the entire set of files and placed them in a directory called hadoop_jars. If you’re not using MapR, you’ll need to find and download your vendor-specific version of the following .jar files:

  • hive-exec.jar
  • hive-jdbc.jar
  • hive-metastore.jar
  • hive-service.jar

Additionally, you will need the following general Hadoop jars (Note: for clarity/long-term applicability of this blog post, I have removed the version number from all of the jars):

  • hive-cli.jar
  • libfb303.jar
  • slf4j-api.jar
  • commons-logging.jar
  • hadoop-common.jar
  • httpcore.jar
  • httpclient.jar

Whew. Once you have the Hive JDBC driver and the 10 other .jar files, we can begin the installation process.

Setting up Hive JDBC driver

Setting up the JDBC driver is simply a matter of providing SQL Workbench with the location of all 11 of the required .jar files. After clicking File -> Manage Drivers, you’ll want to click on the white page icon to create a New Driver. Use the Folder icon to add the .jars:

[Image: SQL Workbench driver manager, showing the Hive driver setup]

For the Classname box, if you are using a relatively new version of Hive, you’ll be connecting to HiveServer2; in that case, the Classname for the Hive driver is org.apache.hive.jdbc.HiveDriver (this should pop up on-screen, you just need to select the value). You are not required to put any value in the Sample URL field. Hit OK and the driver window will close.

Connection Window

With the Hive driver defined, all that’s left is to define the connection string. Assuming your Hadoop administrator didn’t change the default port from 10000, your connection string should look as follows:

[Image: SQL Workbench connection window, showing the Hive connection string]

As stated above, I’m assuming you are using HiveServer2; if so, your connection string will be jdbc:hive2://your-hadoop-cluster-location:10000. After that, type in your Username and Password and you should be all set.

Using Hive with SQL Workbench

Assuming you have achieved success with the instructions above, you’re now ready to use Hive like any other database. You will be able to submit your Hive code via the Query Window, view your schemas/tables (via the ‘Database Explorer’ functionality, which opens in a separate tab) and generally use Hive like any other relational database.
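For example, a few throwaway queries to confirm everything is wired up correctly (the table name below is hypothetical; substitute one from your own cluster):

-- Confirm the connection and browse what's available
SHOW DATABASES;
SHOW TABLES;

-- Keep result sets small; everything streams back to your workstation
SELECT * FROM my_table LIMIT 100;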

Of course, it’s good to remember that Hive isn’t actually a relational database! From my experience, using Hive via SQL Workbench works pretty well, but the underlying processing still happens in Hadoop. So you’re not going to get the clean cancelling of queries you would with an RDBMS, there can be a significant lag before answers come back (due to the Hive overhead), and you can blow up your computer streaming back results larger than available RAM…but it beats working at the command line.


Real-time Reporting with the Adobe Analytics API

Starting with version 1.3.1 of RSiteCatalyst, you can now access the real-time reporting capabilities of the Adobe Analytics API through a familiar R interface. Here’s how to get started…

GetRealTimeConfiguration

Before using the real-time reporting capabilities of Adobe Analytics, you first need to indicate which metrics and elements you are interested in seeing in real-time. To see which reports are already set up for real-time access on a given report suite, you can use the GetRealTimeConfiguration() function:

#Get Real-Time reports that are already set up
realtime_reports <- GetRealTimeConfiguration("<reportsuite>")

It’s likely the case that the first time you set this up, you’ll already see a real-time report for ‘Instances-Page-Site Section-Referring Domain’. You can leave this report in place, or switch the parameters using SaveRealTimeConfiguration().

SaveRealTimeConfiguration

If you want to add/modify which real-time reports are available in a report suite, you can use the SaveRealTimeConfiguration() function:

SaveRealTimeConfiguration("<report suite>",
  metric1 = "instances",
  elements1 = c("page", "referringdomain", "sitesection"),

  metric2 = "revenue",
  elements2 = c("referringdomain", "sitesection"),

  metric3 = "orders",
  elements3 = c("products")
)

Up to three real-time reports can be stored at any given time. Note that you can mix-and-match which reports you want to modify; you don’t have to submit all three reports at the same time. Finally, keep in mind that it can take up to 15 minutes for the API to incorporate your real-time report changes, so if you don’t see your data right away, don’t keep re-submitting the function call!
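For example, a minimal sketch of updating only the first report slot while leaving the other two untouched (the report suite and values are placeholders):

#Update only report slot 1; slots 2 and 3 are left as-is
SaveRealTimeConfiguration("<report suite>",
  metric1 = "orders",
  elements1 = c("products")
)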

GetRealTimeReport

Once you have your real-time reports set up in the API, you can use the GetRealTimeReport() function in order to access your reports. There are numerous parameters for customization; selected examples are below.

Minimum Example - Overtime Report

The simplest function call for a real-time report is to create an Overtime report (monitoring a metric over a specific time period):

rt <- GetRealTimeReport("<report suite>", "instances")

The result of this call will be a DataFrame with 15 rows of one-minute granularity for your metric. This is a great way to monitor real-time orders & revenue during a flash sale, see how users are accessing a landing page for an email marketing campaign, or track any other metric where you want up-to-the-minute status updates.
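If you just want a quick look rather than a full dashboard, a base R sketch like the following will do; I’m assuming here that the metric values sit in the second column of the returned DataFrame, so check names(rt) against your actual result before relying on it:

#Quick-and-dirty line chart of the last 15 minutes (column position assumed)
plot(rt[, 2],
     type = "l",
     xlab = "Minute",
     ylab = "Instances",
     main = "Real-time instances, last 15 minutes")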

Granularity, Offset, Periods

If you want to have a time period other than the last 15 minutes, or one minute granularity is too volatile for the metric you are monitoring, you can add additional arguments to modify the returned DataFrame:

rt2 <- GetRealTimeReport("<reportsuite>",
                  "instances",
                  periodMinutes = "5",
                  periodCount = "12",
                  periodOffset = "10")

For this function call, we will receive instances for the last hour (12 periods of five-minute granularity), with a 10-minute offset (meaning the first time period reported is 10 minutes before now).

Single Elements

Beyond just monitoring a metric over time, you can specify an element such as page to receive your metrics by:

GetRealTimeReport("<reportsuite>",
                  "instances",
                  "page",
                  periodMinutes = "9",
                  periodCount = "3")

This function call will return Instances by Page, for the last 27 minutes (3 rows/periods per page, 9 minute granularity…just because!). Additionally, there are other arguments such as algorithm, algorithmArgument, firstRankPeriod and floorSensitivity that allow for creating reports similar to what is provided in the Real-Time tab in the Adobe Analytics interface.

Currently, even though the Adobe Analytics API supports real-time reports with three breakdowns, only one element breakdown is supported by RSiteCatalyst; the plan is to extend these functions to fully support the real-time capabilities in the near future.

From DataFrame to Something ‘Shiny’

If we’re talking real-time reports, we’re probably talking about dashboarding. If we’re talking about R and dashboarding, then naturally, ggvis/Shiny comes to mind. While providing a full ggvis/Shiny example is beyond the scope of this blog post, it’s my hope to provide a working example in a future blog post. Stay tuned!

