Review: Data Science at the Command Line

Admission: I didn’t really know how computers worked until around 2012.

For the majority of my career, I’ve worked for large companies with centralized IT functions. Like many statisticians, I fell into a comfortable position: learning SAS in a Windows environment, with Ops people to fix any Unix problems I ran into and DBAs to load data into a relational database environment.

Then I became a consultant at a boutique digital analytics firm. To say I was punching above my weight would be an understatement. All of a sudden it was time to go into various companies, have a one-hour kickoff meeting, then start untangling the spaghetti mess that represented their various technology systems. I also needed to figure out the boutique firm’s hacked-together AWS and Rackspace infrastructure.


I’m starting off this review with this admission because my story of learning to work from the command line parallels that of Data Science at the Command Line author Jeroen Janssens:

Around five years ago, during my PhD program, I gradually switched from using Microsoft Windows to GNU/Linux…Out of necessity I quickly became comfortable using the command line. Eventually, as spare time got more precious, I settled down with a GNU/Linux distribution known as Ubuntu…

  • Preface, pg. xi

Because a solid majority of people have never learned anything beyond a point-and-click interface (Windows or Mac), the title of the book Data Science at the Command Line is somewhat unfortunate; this is a book for ANYONE looking to start manipulating files efficiently from the command line.

Getting Started, Safely

One of the best parts of Data Science at the Command Line is that it comes with a pre-built virtual machine with 80-100 or more command line tools installed. This is a very fast and safe way to get started with the command line, as the tools are pre-installed and no matter what command you run while you’re learning, you won’t destroy a computer you actually care about!

Chapters 2 and 3 walk through installing the virtual machine, explain the essential concepts of the command line, demonstrate some basic commands showing simple (but powerful!) ways to chain command line tools together, and cover how to obtain data. What I find so refreshing about these two chapters is that Janssens assumes zero knowledge of the command line on the reader’s part; they are the most accessible summary of how and why to use the command line I’ve ever read (Zed Shaw’s CLI tutorial is a close second, but is quite terse).

The OSEMN model

The middle portion of the book covers the OSEMN model (Obtain-Scrub-Explore-Model-iNterpret) of data science; another way this book is refreshing is that rather than jumping right into machine learning/predictive modeling, the author spends a considerable amount of time covering the gory details of real analysis projects: manipulating data from the format you receive (XML, JSON, sloppy CSV files, etc.) and taking the (numerous) steps required to get it into the format you want.

By introducing tools such as csvkit (CSV manipulation), jq (a JSON processor), and classic tools such as sed (stream editor) and (g)awk, the reader gets a full treatment of how to deal with malformed data files (which in my experience are the only type available in the wild!). Chapter 6 (“Managing Your Data Workflow”) is also a great introduction to reproducible research using Drake (Make for data analysis). This is an area I will personally be focusing my time on, as I tend to run a lot of one-off commands in HDFS and, as of now, just copy them into a plain-text file. Reproducing = copy-paste in my case, which defeats the purpose of computers and scripting!

An Idea Can Be Stretched Too Far

Chapters 8 and 9 cover parallel processing using GNU Parallel and modeling data, respectively. While GNU Parallel is a tool I could see using sometime in the future, I do feel that building models and creating visualizations straight from the command line gets pretty close to being a parlor trick. Yes, it’s obviously possible to do such things (the author even wrote his own command line tool, Rio, for using R from the command line), but with the amount of iteration, feature building and fine-tuning that goes on, I’d rather use IPython Notebook or RStudio to give me the flexibility I need to really iterate effectively.

A Book For Everyone

As I mentioned above, I really feel that Data Science at the Command Line is a book well suited for anyone who does data analysis. Jeroen Janssens has done a fantastic job of taking his original “7 command-line tools for data science” blog post and extending the idea into a full-fledged book. This book has a prominent place in my work library next to Python for Data Analysis, and in the past two months I’ve referred to each book at roughly the same rate. For under $30 for the paperback at Amazon, there’s more than enough content to make you a better data scientist.


Introducing Twitter.jl

This is possibly the latest “announcement” of a package ever, given that Twitter.jl has existed on METADATA for nearly a year now, but that’s how things go sometimes. Here’s how to get started with Twitter.jl.

Hello, World!

If ‘Hello, World!’ is the canonical example of getting started with a programming language, the Twitter API is becoming the first place to start for people wanting to learn about APIs. Authenticating with the Twitter API using Julia is similar to using the R or Python packages, except that rather than doing the OAuth “dance”, Twitter.jl takes all four authentication values in one function:

using Twitter

apikey = "q8Qw7WJTVP..."
apisecret = "FIichPpGJxiOssN..."
accesstoken = "98689850-v0zZNr..."
accesstokensecret = "w7bDg9K0c493T..."

twitterauth(apikey,
            apisecret,
            accesstoken,
            accesstokensecret)

All four of these values can be found after registering at the Twitter Developer page and creating an application. Having all four values in your script is less secure than providing just the API key and API secret, but in the future, I’ll likely implement the full OAuth “handshake”. One thing to keep in mind about this function as it currently works is that no validation of your credentials is performed; the only thing this function does is define a global variable twittercred for later use by the various functions that create the OAuth headers. To shout “Hello, World!” to all of your Twitter followers, you can use the following code:

post_status_update("Hello, World!")

General Package/Function Structure

From the example above, you can see that the function naming follows the Twitter REST API naming convention, with the HTTP verb first and the endpoint as the remainder of the function name. As such, at this early stage of the package it’s a good idea to have the Twitter documentation open while using this package, so that you can quickly find the methods you are looking for.
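
As a quick illustration of that convention, here is how the two functions used in this post map back to their REST endpoints (credentials are assumed to already be set via twitterauth above):

using Twitter

# "POST statuses/update" becomes post_status_update()
post_status_update("Hello, World!")

# "GET search/tweets" becomes get_search_tweets()
recent_tweets = get_search_tweets("#julialang")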

For each function/API endpoint, I’ve gone through and determined which parameters are required; these are required arguments in the Julia functions. For all other options, each function takes a second, optional Dict{String, String} covering any option shown in the Twitter documentation. While this Dict structure allows for ultimate flexibility (and quick definition of functions!), I do realize it’s less than optimal that the function signature doesn’t tell you which optional arguments each Twitter endpoint allows.

As an example, suppose you wanted to search for tweets containing the hashtag #julialang. The minimum function call is as follows:

julia_tweets = get_search_tweets("#julialang")

By default, the API will return the 15 most recent tweets containing the #julialang hashtag. To return the most recent 100 tweets (the maximum per API ‘page’), you can pass the “count” parameter via the Options Dict:

julia_tweets_100 = get_search_tweets("#julialang"; options = {"count" => "100"})

Composite Types and DataFrame Definitions

The Twitter API is structured around 4 return data types (Places, Users, Tweets, and Entities), and I’ve mimicked these using Julia composite types. As such, most functions in Twitter.jl return an array of a specific type, such as the Array{TWEETS,1} from the prior #julialang search example. The benefit of defining custom types for the returned Twitter data is that rudimentary DataFrame methods have also been defined:

df = DataFrame(julia_tweets_100)

I describe these DataFrames as ‘rudimentary’ because they parse only the top level of the JSON into columns, which results in some DataFrame columns having complex data types such as Dict() (and within the Dict(), nested Dicts!). As a running theme in this post, this is something I hope to get around to improving in the future.
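
In the meantime, you can always pull fields directly off the composite types instead of going through the DataFrame. Here’s a minimal sketch, assuming the TWEETS type exposes a text field mirroring the API’s JSON (check the type’s fields to confirm the actual names):

# Collect just the tweet text from the Array{TWEETS,1} returned by
# the earlier get_search_tweets("#julialang") call
tweet_text = [tweet.text for tweet in julia_tweets_100]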

Want to Get Started Developing Julia? Start Here!

One of the common questions I get asked is how to get started with Julia, both from a learning perspective and from a package development perspective. Hacking away on the core Julia codebase is great if you have the ability, but the code can certainly be intimidating (the people are quite friendly though). Creating a package isn’t necessarily hard, but you have to think about an idea you want to implement. The third alternative is…

…improve the Twitter package! If you go to the GitHub page for Twitter.jl, you’ll see a long list of TODO items that need to be worked on. The hardest part (building the OAuth headers) has already been taken care of. What’s left is refactoring the code for simplification, factoring the OAuth code out into a new Julia library (also partially started), then building the Streaming API functions, cleaning up the DataFrame methods to remove the Dict column types, paging through API results…and so on.

So if any of you are on the sidelines wanting to get some practice developing packages, without needing to worry about learning astrophysics first, I’d love to collaborate. And if any Julia programming masters want to collaborate, well, that’s great too. All help and pull requests are welcome.

In the meantime, hopefully some of you will find this package useful for natural language processing, social network analysis or even creating bots 😉


RSiteCatalyst Version 1.4.2 Release Notes

RSiteCatalyst version 1.4.2 is now available on CRAN. This update primarily contains bug fixes, along with one additional feature.

  1. Fixed the QueueRanked function to allow multiple SAINT classifications to be specified. This allows for breaking down one SAINT classification with another, such as breaking down tracking codes by marketing channel and by campaign.
  2. Fixed a bug in an internal function to allow using the same element multiple times in a QueueRanked function call. This was a necessary fix for allowing multiple SAINT classifications in #1.
  3. Exported the previously internal function SubmitJsonQueueReport to allow submitting JSON requests directly to the Adobe Analytics API without all of the R function scaffolding. This approximates the functionality of the Adobe API Explorer.

For the most part, this isn’t a release where most people will notice any differences from version 1.4.1. That said, special thanks go out to Jason Morgan (@framingeinstein) for identifying the two bugs that were fixed AND submitting fixes.

Feature Requests/Bugs

As always, if you come across bugs or have feature requests, please continue to use the RSiteCatalyst GitHub Issues page to submit issues. Don’t worry about cluttering up the page with tickets; please fill out a new issue for anything you encounter (with the code you’ve already tried that is failing), unless you are SURE it is the same problem someone else is facing.

And finally, as I end every blog post about RSiteCatalyst, please note that I’m not an Adobe employee. Please don’t send me your API credentials, expect immediate replies (especially for you e-commerce folks sweating the holiday season!) or ask to set up phone calls to troubleshoot your problems. This is open-source software…Willem Paling and I did the hard part by writing it; you’re expected to support yourself as best as possible unless you believe you’re encountering a bug. Then use GitHub.

