With pretty regular frequency I get emails asking if RSiteCatalyst can be used with Microsoft Power BI. While admittedly I’m not a frequent user of the Windows operating system (nor dashboarding tools like Tableau or Power BI), I am pleased to report that it is in fact possible to call the Adobe Analytics API with Power BI via RSiteCatalyst!
Step 1: Call Adobe Analytics API Using Get Data Menu
Most of the work in getting RSiteCatalyst running within Power BI Desktop is getting the R script correct. From the Get Data menu, choose the More... option to bring up all of the data import tools that Power BI provides:
Once you choose ‘R Script’, an input box will open where you can place your RSiteCatalyst function calls:
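As an illustration, here is the sort of script you might paste into that box. This is only a minimal sketch: the report suite ID, user name, and shared secret below are placeholders you would swap for your own Adobe Analytics API credentials.

```r
library(RSiteCatalyst)

# placeholder credentials -- replace with your own API user name and shared secret
SCAuth("myusername:mycompany", "mysharedsecret")

# each statement returning a data.frame becomes a table Power BI can import
visits_by_day <- QueueOvertime("myreportsuiteid",
                               date.from = "2017-01-01",
                               date.to = "2017-12-31",
                               metrics = c("visits", "pageviews"),
                               date.granularity = "day")
```

Any data.frame the script creates (here, visits_by_day) is what Power BI will offer to import in the next step.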
After hitting ‘OK’, Power BI will evaluate your R code, determining which statements return a data.frame (the only data structure Power BI will import). You can choose which data.frame(s) you want to import from the ‘Navigator’ window:
Once you hit ‘OK’, Power BI imports the data and you can use your Adobe Analytics data just as you would in R with RSiteCatalyst (or like any other data source, such as a CSV file or a database…)
Limitations
While it’s possible to call RSiteCatalyst through Power BI, there are some limitations to keep in mind.
First, RSiteCatalyst will only work with Microsoft Power BI Desktop, which is installed locally on your machine. The Power BI Service, which is more of a shared dashboard/data store environment, does not allow external API calls as part of its security model. So while you can analyze your data locally, you cannot share dashboards to the Power BI Service.
The second limitation I’ve noticed is that Power BI doesn’t read from a .Renviron file (at least, not from the default Windows location that the R GUI reads). So you will need to place your credentials directly in the R script, which is never really ideal (though it may not be a big deal, all things considered).
Finally, the R script runs synchronously, so when placing multiple calls in the same R script you will need to wait for all of the data.frame results before you can use any of them within Power BI. This is the same default behavior as in R itself, absent promises or parallelism of some sort, but it’s still important to keep in mind.
Dashboards, Dashboards, Dashboards!
With a few minutes’ work, I was able to create this rudimentary dashboard (R code):
Someone with more interesting/higher volume data could surely do better. But the most important thing in my opinion is that Microsoft has built an awesome integration with R and that creating dashboards in Power BI is waaaaaay easier than the last time I tried to create a dashboard using Excel and the Adobe Report Builder plugin.
Edit 10/1/2018: When I wrote this blog post, the company and product were named MapD. I’ve changed the title to reflect the new company name, but left the MapD references below to hopefully avoid confusion.
In my previous MapD post, I loaded electricity data into MapD Community Edition, intentionally ignoring the what of the data to keep that post from being too overwhelming. Now let’s take a step back: I’ll explain the dataset, show how I used Python to format the data that were loaded into MapD, and then use the MapD Immerse UI to build a simple dashboard.
PJM Metered Load Data
I started off my career at PJM doing long-term electricity demand forecasting, to help power engineers do transmission line studies for reliability and to support expansion of the electrical grid in the U.S. Because PJM is a quasi-government agency, they provide over 25 years of hourly electricity usage for the Eastern and Central U.S., both in aggregate and by local power region (roughly, the local power company territories).
However, just because the data are available doesn’t mean they are convenient to use; unfortunately, the data are stored as Excel spreadsheets. This is easily remedied using pandas (v0.22.0, Python 3.6):
```python
import os
import pandas as pd

# change to directory with files for convenience
os.chdir(os.path.expanduser("~/electricity_data"))

# first sheet in workbook contains all info for years 1993-1999
df1993_1999 = [pd.read_excel(str(x) + "-hourly-loads.xls", usecols="A:Z") for x in range(1993, 1999)]

# melt, append df1993-df1999 together
df_melted = pd.DataFrame()
for x in df1993_1999:
    x.columns = df1993_1999[1].columns.tolist()
    x_melt = pd.melt(x, id_vars=['ACTUAL_DATE', 'ZONE_NAME'], var_name="HOUR_ENDING", value_name="MW")
    df_melted = df_melted.append(x_melt)

# multiple sheets to concatenate
# too much variation for a one-liner
d2000 = pd.read_excel("2000-hourly-loads.xls", sheet_name=[x for x in range(2, 17)], usecols="A:Z")
d2001 = pd.read_excel("2001-hourly-loads.xls", sheet_name=None, usecols="A:Z")
d2002 = pd.read_excel("2002-hourly-loads.xls", sheet_name=[x for x in range(1, 18)], usecols="A:Z")
d2003 = pd.read_excel("2003-hourly-loads.xls", sheet_name=[x for x in range(1, 19)], usecols="A:Z")
d2004 = pd.read_excel("2004-hourly-loads.xls", sheet_name=[x for x in range(2, 24)], usecols="A:Z")
d2005 = pd.read_excel("2005-hourly-loads.xls", sheet_name=[x for x in range(2, 27)], usecols="A:Z")
d2006 = pd.read_excel("2006-hourly-loads.xls", sheet_name=[x for x in range(3, 29)], usecols="A:Z")
d2007 = pd.read_excel("2007-hourly-loads.xls", sheet_name=[x for x in range(3, 29)], usecols="A:Z")
d2008 = pd.read_excel("2008-hourly-loads.xls", sheet_name=[x for x in range(3, 29)], usecols="A:Z")
d2009 = pd.read_excel("2009-hourly-loads.xls", sheet_name=[x for x in range(3, 29)], usecols="A:Z")
d2010 = pd.read_excel("2010-hourly-loads.xls", sheet_name=[x for x in range(3, 29)], usecols="A:Z")
d2011 = pd.read_excel("2011-hourly-loads.xls", sheet_name=[x for x in range(3, 32)], usecols="A:Z")
d2012 = pd.read_excel("2012-hourly-loads.xls", sheet_name=[x for x in range(3, 33)], usecols="A:Z")
d2013 = pd.read_excel("2013-hourly-loads.xls", sheet_name=[x for x in range(3, 34)], usecols="A:Z")
d2014 = pd.read_excel("2014-hourly-loads.xls", sheet_name=[x for x in range(3, 34)], usecols="A:Z")
d2015 = pd.read_excel("2015-hourly-loads.xls", sheet_name=[x for x in range(3, 40)], usecols="B:AA")
d2016 = pd.read_excel("2016-hourly-loads.xls", sheet_name=[x for x in range(3, 40)], usecols="B:AA")
d2017 = pd.read_excel("2017-hourly-loads.xls", sheet_name=[x for x in range(3, 42)], usecols="B:AA")
d2018 = pd.read_excel("2018-hourly-loads.xls", sheet_name=[x for x in range(3, 40)], usecols="B:AA")

# loop over dataframes, read in matrix-formatted data, melt to normalized form
for ord in [d2000, d2001, d2002, d2003, d2004, d2005, d2006, d2007, d2008, d2009,
            d2010, d2011, d2012, d2013, d2014, d2015, d2016, d2017, d2018]:
    for key in ord:
        temp = ord[key]
        temp.columns = df1993_1999[1].columns.tolist()  # standardize column names
        temp["ACTUAL_DATE"] = pd.to_datetime(temp["ACTUAL_DATE"])  # force datetime, excel reader wonky
        df_melted = df_melted.append(pd.melt(temp, id_vars=['ACTUAL_DATE', 'ZONE_NAME'],
                                             var_name="HOUR_ENDING", value_name="MW"))

# (4941384, 4)
# 130MB as CSV
# remove any dates that are null, artifacts from excel reader
df_melted[pd.notnull(df_melted["ACTUAL_DATE"])].to_csv("hourly_loads.csv", index=False)
```
The code is a bit verbose, if only because I didn’t want to spend time to figure out how to programmatically determine how many tabs each workbook has. But the concept is the same each time: read an Excel file, get the data into a dataframe, then convert the data to long form. So instead of having 26 columns (Date, Zone, Hr1-Hr24), we have 4 columns, which is quite frequently a more convenient way to access the data (especially when using SQL).
The final statement writes out a CSV of approximately 4MM rows, the same dataset that was loaded using mapdql in the first post.
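As an aside, mapdql isn’t the only way to get the data in; pymapd (previewed at the end of this post) can append a pandas DataFrame directly. Here’s a minimal sketch, assuming a local MapD Community Edition install with the default credentials and that the hourly_loads table already exists from the first post:

```python
import pandas as pd
import pymapd

# default credentials for a local MapD Community Edition install
con = pymapd.connect(user="mapd", password="HyperInteractive",
                     host="localhost", dbname="mapd")

# append the normalized data to the existing hourly_loads table
df = pd.read_csv("hourly_loads.csv", parse_dates=["ACTUAL_DATE"])
con.load_table("hourly_loads", df)
```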
Top 10 Usage Days By Season
One of the metrics I used to monitor as part of my job was the top 5/top 10 peak electricity use days per Summer (high A/C usage) and Winter (electric space heating) seasons. Back in those days, I used to use SAS against an enterprise database and the results would come back eventually…
Obviously, it’s not fair to compare today’s GPUs to late-’90s enterprise databases in terms of performance, but back then it took a non-trivial amount of effort to run this query and keep the report updated. With MapD, I can run the same report in ~100ms:
```sql
-- MapD doesn't currently support window functions, so need to precalculate maximum by day
with qry as (
    select actual_date, zone_name, max(MW) as daily_max_usage
    from hourly_loads
    where zone_name = 'MIDATL'
      and actual_date between '2017-06-01' and '2017-09-30'
    group by 1, 2
)
select hl.actual_date, hl.zone_name, hl.hour_ending, hl.MW
from hourly_loads as hl
inner join qry on qry.actual_date = hl.actual_date
  and qry.daily_max_usage = hl.mw
order by daily_max_usage desc
limit 10;
```
The thing about returning an answer in 100ms or so is that it’s fast enough that calling these results from a webpage/dashboard would be very responsive; that’s where MapD Immerse comes in.
Building A Dashboard Using MapD Immerse
Rather than copy/pasting the query in and running it, it’s pretty easy to build an automated report using the Immerse dashboard builder. I’m limited to a single data source because I’m using MapD Community Edition, but in just a few minutes I was able to create the following dashboard:
I took the query from above and built a view to encapsulate it (sketched below), so I didn’t have to worry about the with statement or joins; I could just use the view as if the results were pre-calculated. From there, adding a results table and two bar charts was fairly quick (in the same drag-and-drop style as Tableau and other BI/reporting tools).
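The view itself is nothing fancy; something along these lines, where the view name is mine for illustration and the body is just the query from the previous section:

```sql
-- hypothetical view name, wrapping the top-10 query so Immerse can treat it like a table
create view midatl_summer_peaks as
with qry as (
    select actual_date, zone_name, max(MW) as daily_max_usage
    from hourly_loads
    where zone_name = 'MIDATL'
      and actual_date between '2017-06-01' and '2017-09-30'
    group by 1, 2
)
select hl.actual_date, hl.zone_name, hl.hour_ending, hl.MW
from hourly_loads as hl
inner join qry on qry.actual_date = hl.actual_date
  and qry.daily_max_usage = hl.mw
order by daily_max_usage desc
limit 10;
```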
While this dashboard is pretty rudimentary in its design, if this data source were fed in real time using Apache Kafka or similar, the dashboard would always be up to date for display on a TV screen or as a browser bookmark, without any additional data or web engineering.
Obviously, many dashboarding tools exist, but it’s important to note that no pre-aggregation, column indexing, or other standard database performance tricks are being employed (outside of specialized hardware and fast GPU RAM caching). Even with 10 dashboard tiles updating serially 100ms at a time, you are still looking at a 1-2s page load, on par with the fastest-loading dynamic webpages on the internet.
Programmatic Analytics Using pymapd
While dashboarding can be very effective for keeping senior management up-to-date, the real value of data is unlocked with more in-depth analytics and segmentation. In my next blog post, I’ll cover how to access MapD using pymapd in Python, doing more advanced visualizations and maybe even some machine learning…
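As a small preview, here’s a minimal sketch of connecting from Python and pulling a query result back into pandas; the credentials are again the Community Edition defaults and would differ for your install:

```python
import pandas as pd
import pymapd

# default credentials for a local MapD Community Edition install
con = pymapd.connect(user="mapd", password="HyperInteractive",
                     host="localhost", dbname="mapd")

# pymapd follows the DBAPI spec, so execute() returns a cursor
cursor = con.execute("""
    select actual_date, zone_name, hour_ending, MW
    from hourly_loads
    where zone_name = 'MIDATL'
    order by MW desc
    limit 10
""")

# build a dataframe from the cursor results
top10 = pd.DataFrame(cursor.fetchall(),
                     columns=[c[0] for c in cursor.description])
print(top10)
```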
Like the last several updates, this blog post will be fairly short, given that only a single bug fix was made.
Thanks again to GitHub user leocwlau who reported that the GetReportSuiteGroups function added an additional field AND provided the solution. No other bug fixes were made, nor was any additional functionality added.
Version 1.4.14 of RSiteCatalyst was submitted to CRAN today and should be available for download in the coming days.
Community Contributions
As I’ve mentioned in many a blog post before this one, I encourage all users of the software to continue reporting bugs via GitHub issues, especially if you can provide a working code example. Even better, a fix via pull request will ensure that your bug is addressed in a timely manner and for the benefit of others in the community.
Note: Please don’t email me directly via the address in the RSiteCatalyst package; it will not be returned. Having a valid email contact in the package is a requirement for having a package listed on CRAN so they can contact the package author; it is not meant to imply I can/will provide endless, personalized support for free.