Recently I’ve been doing a fair bit of work with geospatial data, mostly on the data preparation side. While there are common data formats, I have found that because so much of this data is sourced from government agencies, it often arrives split across many files that need to be concatenated.
In this example, I will show how to take a few dozen county-level shapefiles of parcel data from Utah and load them into a single table in Postgres/Postgis.
Step 1: Downloading Shapefiles
The following shell commands come from an in-progress collaboration with a friend, in which we are analyzing daily air quality in Utah over the past several years. Utah is open-data-friendly, providing shapefiles for every parcel of land in the state.
While it may have been possible to use wget or curl to download every shapefile, they are stored within Google Drive behind a bunch of hashed URLs, so I just clicked on each file instead of trying to be clever. So if you want to follow along with this blog post exactly, you’ll need to download the 25 zip files of Utah shapefiles:
Step 2: Bulk Unzip
With all of these files in the same directory at the same level (i.e. no subfolders), it’s pretty easy to bulk unzip the files, with one caveat: to move the contents of the unzipped files into a new directory, you need to use the -d flag:
The reason I created a new directory (mkdir) and then unzipped the files into it is that when doing analysis, I always like to keep the source data separate, so that I always have the option of starting completely over. It can also make regular-expression globs easier :)
Step 3: Creating Postgis Table Definition
After all of the county zip files are unzipped, you get 25 sub-directories structured like the following:
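The exact names vary by county, but each sub-directory holds one shapefile plus its standard sidecar files (.dbf, .prj, .shx). Using the Beaver county files as a representative illustration (the top-level directory name is the one chosen when unzipping; the other 24 counties follow the same pattern):

```
parcels_unzipped/
└── Parcels_Beaver/
    ├── Parcels_Beaver_LIR.dbf
    ├── Parcels_Beaver_LIR.prj
    ├── Parcels_Beaver_LIR.shp
    └── Parcels_Beaver_LIR.shx
```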
The .shp files from the 25 counties all have the same format, which is very convenient. In this step, we can use the shp2pgsql utility that comes with Postgis to read a shapefile, determine the proper schema, then create the table in the database:
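A minimal sketch of that command. The directory layout, the database name utah, and the SRID are assumptions for illustration (26912 is NAD83 / UTM zone 12N, a plausible choice for Utah data, but check the .prj file for the real value); the table name utahlirparcels matches the one used later in the post:

```shell
# Prepare mode (-p): read the schema from one county's shapefile and
# create a matching empty table in the database; no rows are loaded.
# Path, SRID, database name, and table name are illustrative.
shp2pgsql -p -s 26912 parcels_unzipped/Parcels_Beaver/Parcels_Beaver_LIR.shp utahlirparcels | psql -d utah
```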
The key flag here is -p, which means ‘prepare mode’: the shapefile gets read and a table created, but no data is loaded. Skipping the data load in this step makes looping over the files easier later, since no special logic is needed to keep Parcels_Beaver_LIR.shp from being duplicated in Postgis (because it was never loaded in the first place).
Step 4: Bulk Loading Shapefiles into Postgis
The last steps of the loading process are to 1) get all of the shapefile locations and 2) feed them to shp2pgsql:
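A sketch of the loop, under the same assumptions as before (the parcels_unzipped directory, SRID 26912, and database name utah are all illustrative; the -a flag and the utahlirparcels table are from the post):

```shell
# Find every .shp file under the working directory, then append (-a)
# each one into the utahlirparcels table created in prepare mode.
for i in $(find parcels_unzipped -type f -name "*.shp"); do
  shp2pgsql -a -s 26912 "$i" utahlirparcels | psql -d utah
done
```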
To get all of the shapefile locations, I use find with the flags -type f (match regular files) and -name (match a filename pattern) to search within the directory. This command goes through the entire set of subdirectories and gets all of the .shp files. From there, I iterate over the list of files using for i in..., then pass the value of $i into a shp2pgsql call similar to the one above. However, rather than using flag -p for ‘prepare’, we are now going to use flag -a for ‘append’. This performs an INSERT INTO utahlirparcels statement in Postgres, loading the actual data from the 25 shapefiles.
Spend Time Now To Save Time Later
Like so much of shell scripting, figuring out these commands took longer than I would’ve expected; certainly longer than it would’ve taken to copy-paste a shp2pgsql command 25 times! But by taking the time upfront to figure out a generic method of looping over shapefiles, the next time (and every time after that) I find myself needing to do this, the code will already be written to load multiple shapefiles into Postgis.