Projecting spatial data in Python and R

I use my blog as a sort of scholarly notebook. I often repeat a task and then can’t find where I did it previously. One example is projecting crime data, so here are my notes on how to do that in Python and R.

Commonly I want to take public crime data that is in spherical lat/lon coordinates and project it to some local projection, most of the time so I can do simple Euclidean geometry (like buffers within X feet, or distance to the nearest crime generator in meters). Sometimes you need to do the opposite: if I have projected data and I want to plot the points on a webmap, it is easier to work with the lat/lon coordinates. As a note, if you import your map data and then your points are not on the map (or are in a way-off location), there is some sort of problem with the projection.

I used to do this in ArcMap (toolbox -> Data Management -> Projections), but doing it in these programs is faster. Here are examples of going back and forth for some Dallas coordinates. Here is the data and code to replicate the post.

Python

In python there is a library, pyproj, that does all the work you need. It isn’t part of the default python packages, so you will need to install it using pip or whatever. Basically you just need to define the to/from projections you want. Also, it always returns the projected coordinates in meters, so if you want feet you need to do a conversion from meters to feet (or whatever unit you want). Below, p1 is the definition you want for lat/lon in webmaps (which is not a projection at all). To figure out your local projection, though, takes a little more work.

To figure out your local projection I typically use this online tool, prj2epsg. You can upload a prj file, which is the locally defined projection file for shapefiles. (It is plain text, so you can also just open it in a text editor and paste the contents into that site.) It will then tell you what EPSG code corresponds to your projection.
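
As an aside, newer versions of pyproj (2+) can do this lookup locally as well. A minimal sketch, assuming the prj file for the Dallas boundary shapefile used later in the post is in the working directory:

import pyproj #needs pyproj 2+ for the CRS class

#prj files are plain text WKT, so read and parse directly
wkt = open('DallasBoundary_Proj.prj').read()
crs = pyproj.CRS.from_wkt(wkt)
print(crs.to_epsg()) #may return None if there is no exact EPSG match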

Below illustrates putting it all together and going back and forth for an example area in Dallas. I tend to write the functions to take one record at a time for use in various workflows, but a vectorized version that takes whole lists would be a better approach (I show a quick sketch of that after the code).

import pyproj

#These functions convert to/from Dallas projection
#In feet to lat/lon
p1 = pyproj.Proj(proj='latlong',datum='WGS84')
p2 = pyproj.Proj(init='epsg:2276') #show how to figure this out, http://spatialreference.org/ref/epsg/ and http://prj2epsg.org/search 
met_to_feet = 3.280839895 #http://www.meters-to-feet.com/

#This converts Lat/Lon to projected coordinates
def DallConvProj(Lat,Lon):
    #always returns in meters
    if abs(Lat) > 180 or abs(Lon) > 180:
        return (None,None)
    else:
        x,y = pyproj.transform(p1, p2, Lon, Lat)
        return (x*met_to_feet, y*met_to_feet)

#This does the opposite, converts projected to lat/lon
def DallConvSph(X,Y):
    if abs(X) < 2000000 or abs(Y) < 6000000:
        return (None,None)
    else:
        Lon,Lat = pyproj.transform(p2, p1, X/met_to_feet, Y/met_to_feet)
        return (Lon, Lat)

#check coordinates
x1 = -96.828295; y1 = 32.832521
print(DallConvProj(Lat=y1,Lon=x1))

x2 = 2481939.934525765; y2 = 6989916.200679892
print(DallConvSph(X=x2, Y=y2))
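
Here is a sketch of the vectorized version mentioned above. pyproj.transform accepts sequences as well as scalars, so you can pass whole lists in one call (the second coordinate pair is just a made-up nearby point for illustration):

#vectorized sketch, converting whole lists at once
lons = [-96.828295, -96.828000]
lats = [32.832521, 32.833000]
xs, ys = pyproj.transform(p1, p2, lons, lats)
print([(x*met_to_feet, y*met_to_feet) for x,y in zip(xs,ys)])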

R

In R I use the library proj4 to do the projections for point data. R can read in the projection data from a file as well using the rgdal library.

library(proj4)
library(rgdal)

#read in projection from shapefile
MyDir <- "C:\\Users\\axw161530\\Dropbox\\Documents\\BLOG\\Projections_R_Python"
setwd(MyDir)
DalBound <- readOGR(dsn="DallasBoundary_Proj.shp",layer="DallasBoundary_Proj")
DalProj <- proj4string(DalBound)    

ProjData <- data.frame(x=c(2481939.934525765),
                       y=c(6989916.200679892),
                       lat=c(32.832521),
                       lon=c(-96.828295))
       
LatLon <- proj4::project(as.matrix(ProjData[,c('x','y')]), proj=DalProj, inverse=TRUE)
#check to see if true
cbind(ProjData[,c('lon','lat')],as.data.frame(LatLon))

XYFeet <- proj4::project(as.matrix(ProjData[,c('lon','lat')]), proj=DalProj)
cbind(ProjData[,c('x','y')],XYFeet)    

plot(DalBound)
points(ProjData$x,ProjData$y,col='red',pch=19,cex=2)

The last plot function shows that the XY point is within the Dallas boundary for the projected data. But if you want to project the boundary file as well, you can use the spTransform function. Here I have a simple example of taking the projected boundary file and transforming it to lat/lon, so it can be superimposed on a leaflet map.

Additionally I show a trick I sometimes use for maps: transforming the boundary polygon to a polyline, as it sometimes provides easier styling options.

#transform boundary to lat/lon
DalLatLon <- spTransform(DalBound,CRS("+init=epsg:4326") )
plot(DalLatLon)
points(ProjData$lon,ProjData$lat,col='red',pch=19,cex=2)

#Leaflet useful for boundaries to be lines instead of areas
DallLine <- as(DalLatLon, 'SpatialLines')
library(leaflet)

BaseMapDallas <- leaflet() %>%
  addProviderTiles(providers$OpenStreetMap, group = "Open Street Map") %>%
  addProviderTiles(providers$CartoDB.Positron, group = "CartoDB Lite") %>%
  addPolylines(data=DallLine, color='black', weight=4, group="Dallas Boundary Lines") %>%
  addPolygons(data=DalLatLon,color = "#1717A1", weight = 1, smoothFactor = 0.5,
              opacity = 1.0, fillOpacity = 0.5, group="Dallas Boundary Area") %>%
  addLayersControl(baseGroups = c("Open Street Map","CartoDB Lite"),
                   overlayGroups = c("Dallas Boundary Area","Dallas Boundary Lines"),
                   options = layersControlOptions(collapsed = FALSE)) %>%
                   hideGroup("Dallas Boundary Lines")   
                      
BaseMapDallas

I have too much stuff in the blog queue at the moment, but hopefully I get some time to write up my notes on using leaflet maps in R soon.


American Community Survey Variables of Interest to Criminologists

I’ve written prior blog posts about downloading Five Year American Community Survey data estimates (ACS for short) for small area geographies, but one of the main hiccups is figuring out what variables you want to use. The census has so many variables that are just small iterations of one another (e.g. Males under 5, males 5 to 9, males 10 to 14, etc.) that it is quite a chore to specify the ones you want. Often you want combinations of variables or to calculate percentages as well, so you need to take two or more variables and turn them into your constructed variable.

I have posted some notes on the variables I have used for past projects in an excel spreadsheet. This includes the original variables, as well as some notes for creating percentage variables. Some are tricky: for example, to figure out the proportion of black residents for block groups you need to add the non-Hispanic black and Hispanic black estimates (and then divide by the total population). For spatially oriented criminologists these are basically the indicators commonly used for social disorganization. It also includes notes on what is available at the smaller block group level, as not all of the variables are, so you are more limited in your choices if you want that small of an area.
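
To make that concrete, here is a minimal pandas sketch of constructing the percent black measure. The B03002 variable codes are from my notes, so double check them against the ACS documentation:

import pandas as pd

#hypothetical csv of block group estimates, one row per block group
acs = pd.read_csv('acs_blockgroups.csv')
#B03002_001 total population, B03002_004 non-Hispanic black, B03002_014 Hispanic black
acs['PctBlack'] = 100*(acs['B03002_004'] + acs['B03002_014'])/acs['B03002_001']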

Let me know if you have been using other variables for your work. I’m not an expert on these variables by any stretch, so don’t take my list as authoritative in any way. For example, I have no idea whether it is valid to use the imputed data for moving in the prior year at the block group level. (In general I have not incorporated the estimates of uncertainty for any of the variables into my analyses, and I am not sure of the additional implications for the imputed data tables.) Also I have not incorporated variables that could be used for income inequality or for ethnic heterogeneity (besides using white/black/Hispanic to calculate the index). I’m sure there are other social disorganization relevant variables at the block group level folks may be interested in as well, so let me know in the comments or shoot me an email if you have suggestions to update my list.

I would prefer if as a field we could create a set of standardized indices so we are not all using different variables (see for example this Jeremy Miles paper). It is a bit of a hodge-podge what variables folks use from study to study, and most folks don’t report the original variables, so it is hard to replicate their work exactly. The British have their index of deprivation, and it would be nice to have a similarly standardized measure to use in social science research for the States.


The ACS data has consistent variable names over the years, such as B03001_001 for the total population and B03002_003 for the non-Hispanic white population. Unfortunately those variables are not necessarily in the same tables from year to year, so concatenating ACS results over multiple years is a bit of a pain. Below I post a python script that, given a directory of the excel template files, will produce a nice set of dictionaries to help find what table particular variables are in.

#This python code grabs ACS meta-data templates
#To easier search for tables that have particular variables
import xlrd, os

mydir = r'!!!Insert your path to the excel files here!!!!!'

def acs_vars(directory):
    #get the excel files in the directory
    excel_files = []
    for file in os.listdir(directory):
        if file.endswith(".xls"):
            excel_files.append( os.path.join(directory, file) )
    #getting the variables in a nice dictionaries
    lab_dict = {}
    loc_dict = {}
    for file in excel_files:
        book = xlrd.open_workbook(file) #first open the xls workbook
        sh = book.sheet_by_index(0)
        vars = [i.value for i in sh.row(0)] #names on the first row
        labs = [i.value for i in sh.row(1)] #labels on the second
        #now add to the overall dictionary
        for v,l in zip(vars,labs):
            lab_dict[v] = l
            loc_dict[v] = file
    #returning the two dictionaries
    return lab_dict,loc_dict
    
labels,tables = acs_vars(mydir)

#now if you have a list of variables you want, you can figure out the table
interest = ['B03001_001','B02001_005','B07001_017','B99072_001','B99072_007',
            'B11003_016','B14006_002','B01001_003','B23025_005','B22010_002',
            'B16002_004']
            
for i in interest:
    head, tail = os.path.split(tables[i])
    print (i,labels[i],tail)

Drawing Google Streetview images down an entire street using python

I’ve previously written about grabbing Google Streetview images given a particular address. For a different project I sampled images running along an entire street, so figured I would share that code. It is a bit more complicated though: when you base it off an address you do not need to worry about drawing the same image twice, but when sampling along a street you do. So I will walk through an example.

So first we will import the necessary libraries, and then globally define your user key and the download folder you want to save the streetview images into.

#Upfront stuff you need
import urllib, os, json
key = "&key=" + "!!!!!!!!!!!!!YourAPIHere!!!!!!!!!!!!!!!!"
DownLoc = r'!!!!!!!!!!!YourFileLocationHere!!!!!!!!!!!!!!'  

Second are a few functions. The first, MetaParse, grabs the date (month and year) and pano_id from a particular street view image. This matters because if you submit a slightly different lat-lon, Google will return the same image again. To prevent that, we do a sort of memoization: we grab the meta-data first and stuff it in a global list, PrevImage. Then if you have already downloaded an image once, the second function, GetStreetLL, will not download it again, as it checks the PrevImage list. If you are doing a ton of images you may want to limit the size of PrevImage to a certain amount, but it is no problem doing a few thousand images as is. (With a free account you can IIRC get 25,000 images in a day, but the meta-data queries count against that limit as well.)

def MetaParse(MetaUrl):
    response = urllib.urlopen(MetaUrl)
    jsonRaw = response.read()
    jsonData = json.loads(jsonRaw)
    #return jsonData
    if jsonData['status'] == "OK":
        if 'date' in jsonData:
            return (jsonData['date'],jsonData['pano_id']) #sometimes it does not have a date!
        else:
            return (None,jsonData['pano_id'])
    else:
        return (None,None)

PrevImage = [] #Global list that has previous images sampled, memoization kindof        
        
def GetStreetLL(Lat,Lon,Head,File,SaveLoc):
    base = r"https://maps.googleapis.com/maps/api/streetview"
    size = r"?size=1200x800&fov=60&location="
    end = str(Lat) + "," + str(Lon) + "&heading=" + str(Head) + key
    MyUrl = base + size + end
    fi = File + ".jpg"
    MetaUrl = base + r"/metadata" + size + end
    #print MyUrl, MetaUrl #can check out image in browser to adjust size, fov to needs
    met_lis = list(MetaParse(MetaUrl))                           #does not grab image if no date
    if (met_lis[1],Head) not in PrevImage and met_lis[0] is not None:   #PrevImage is global list
        urllib.urlretrieve(MyUrl, os.path.join(SaveLoc,fi))
        met_lis.append(fi)
        PrevImage.append((met_lis[1],Head)) #append new Pano ID to list of images
    else:
        met_lis.append(None)
    return met_lis  

Now we are ready to download images running along an entire street. To get the necessary coordinates and heading information I worked it out in a GIS. Using a street centerline file I regularly sampled points along the streets. Based on those sample points you can calculate the local trajectory of the street, and then based on that trajectory turn the camera how you want it. Most social science folks I imagine want it to look at the sidewalk, so you will calculate 90 degrees from the orientation of the street.
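
If you wanted to do that trajectory calculation in python instead of a GIS, here is a rough sketch using the standard initial bearing formula (fine at street scale, but vet it for your own use):

import math

#approximate compass bearing from point 1 to point 2, then rotate 90 degrees
#so the camera faces the sidewalk instead of down the street
def SideHeading(lat1,lon1,lat2,lon2,rotate=90):
    lat1r, lat2r = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon)*math.cos(lat2r)
    y = math.cos(lat1r)*math.sin(lat2r) - math.sin(lat1r)*math.cos(lat2r)*math.cos(dlon)
    return (math.degrees(math.atan2(x,y)) + rotate) % 360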

Using trial and error I found that spacing the samples around 40 feet apart tended to get a new image. I have the pixel size and fov parameters to the streetview API hard-coded in the function, but you could easily amend the function to take those as arguments as well.

So next I have an example list of tuples with lat-lons and orientations. Then I just loop over those sample locations and draw the images. Here I also have another list, image_list, that records where I saved the images to, along with the pano-id and date meta-data.

DataList = [(40.7036043470179800,-74.0143908501053400,97.00),
            (40.7037139540670900,-74.0143727485309500,97.00),
            (40.7038235569946140,-74.0143546472568100,97.00),
            (40.7039329592712600,-74.0143365794219800,97.00),
            (40.7040422704154500,-74.0143185262956300,97.00),
            (40.7041517813782500,-74.0143004403322000,97.00),
            (40.7042611636045350,-74.0142823755611700,97.00),
            (40.7043707615693800,-74.0142642750708300,97.00)]

    
image_list = [] #to stuff the resulting meta-data for images
ct = 0
for i in DataList:
    ct += 1
    fi = "Image_" + str(ct)
    temp = GetStreetLL(Lat=i[0],Lon=i[1],Head=i[2],File=fi,SaveLoc=DownLoc)
    if temp[2] is not None:
        image_list.append(temp)

I have posted the entire python code snippet here. If you want to see the end result, you can check out the photo album. Below is one example image out of the 8 in that street segment, but when viewing the whole album you can see how it runs along the entire street.

Still, one limitation is that there is no easy way I can tell to draw older images; with this approach you just get the most recent image. You need to know the pano-id to query older images, and preferably the meta-data json would contain multiple entries, but that is not the case. Let me know if there is a way to amend this to grab older imagery or imagery over time. Here is a great example from Kyle Walker showing changes over time in Detroit.

Paper published: The effect of housing demolitions on crime in Buffalo, New York

I have a new paper published with a few of my colleagues up in Buffalo, Dae-Young Kim and Scott Phillips. This work looks at the crime reduction effects of widespread demolitions in Buffalo, is titled The Effect of Housing Demolitions on Crime in Buffalo, New York, and was published in the Journal of Research in Crime & Delinquency. In short, at the micro level there is very strong evidence that demolitions reduce crime, but at the neighborhood level the evidence is not as strong. This is likely partly due to the neighborhood level analysis being underpowered, as several of the estimates between the two levels are very similar overall.

If you cannot get access to that published article, you can always send me an email for a copy, or you can download a pre-print version from SSRN.

Below is one of the images from the paper, a set of small-multiple maps showing demographic characteristics of Buffalo census tracts:

Someone could surely replicate this micro level result in other cities that have experienced widespread demolitions (like Detroit). But for long term city planners I would consider more rigorous designs that incorporate not only selective demolition, but also other neighborhood investment strategies to improve neighborhoods over the long term. That is, this research is good evidence of the near-term crime reduction effects of demolitions, but for the long haul leaving empty lots is not going to greatly improve neighborhoods.

New working paper: Mapping attitudes towards the police at micro places

I have a new preprint posted, Mapping attitudes towards the police at micro places. This is work with Jasmine Silver, as well as Rob Worden and Sarah McLean. See the abstract:

We demonstrate the utility of mapping community satisfaction with the police at micro places using data from citizen surveys conducted in 2001, 2009 and 2014 in one city. In each survey, respondents provided the nearest intersection to their address. We use inverse distance weighting to map a smooth surface of satisfaction with police over the entire city, which shows broader neighborhood patterns of satisfaction as well as small area hot spots of dissatisfaction. Our results show that hot spots of dissatisfaction with police do not conform to census tract boundaries, but rather align closely with hot spots of crime and police activity. Models predicting satisfaction with police show that local counts of violent crime are the strongest predictors of attitudes towards police, even above individual level predictors of race and age.

In this article we make analogs of hot spot maps of crime, but measuring dissatisfaction with the police.
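
For the unfamiliar, here is a minimal sketch of inverse distance weighting, an illustration of the general idea rather than the exact implementation in the paper:

import numpy as np

#estimate the surface at grid point g from survey locations pts (n x 2 array)
#with responses vals (length n); closer responses get more weight
def idw(g, pts, vals, power=2):
    d = np.sqrt(((pts - g)**2).sum(axis=1))
    d = np.maximum(d, 1e-8) #guard against division by zero at exact locations
    w = 1.0/d**power
    return (w*vals).sum()/w.sum()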

One of the interesting findings is that these hot spots do not align nicely with census tracts (the tracts are generalized, we cannot divulge the location of the city). So the areas identified by each procedure would be much different.

As always, feel free to comment or send me an email if you have feedback on the article.

Creating an animated heatmap in Excel

I’ve been getting emails recently about the online Carto service not continuing their free use model. I’ve previously used this service to create animated heatmaps over time, in particular a heatmap of reported meth labs over time. That map still currently works, but I’m not sure for how long. The functionality can be replicated in recent versions of Excel though, so I will do a quick walkthrough of how to make an animated map. The csv to follow along with, as well as the final produced excel file, you can download from this link.

I split the tutorial into two parts. Part 1 is prepping the data so the Excel 3d Map will accept it. Part 2 is making the map pretty.

Prepping the Data

The first steps before we can make the map in Excel are:

  1. eliminate rows with missing dates
  2. turn the data into a table
  3. explicitly set the date column to a date format
  4. save as an excel file

We need to do those four steps before we can worry about the mapping part. (It took me forever to figure out that it did not like missing data in the time field!)
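
(If you would rather script this prep than point-and-click, below is a rough pandas equivalent of the same steps. I am guessing the date column is named DATE, so check the actual header in the csv.)

import pandas as pd

#a sketch of the same prep in code; 'DATE' is a guess at the column name
labs = pd.read_csv('Geocoded_MethLabs.csv')
labs = labs.dropna(subset=['DATE'])         #1. eliminate rows with missing dates
labs['DATE'] = pd.to_datetime(labs['DATE']) #3. explicitly set the date column to dates
#step 2, formatting as an excel Table, is only needed in the point-and-click route
labs.to_excel('Geocoded_MethLabs.xlsx', index=False) #4. save as an excel file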

So first, after you have downloaded that data, double click to open the Geocoded_MethLabs.csv file in Excel. Once that sheet is open select the G column, and then sort Oldest to Newest.

It will give you a pop-up to Expand the selection – keep that default checked and click the Sort button.

After that, scroll down to the current bottom of the spreadsheet. There are 30+ records in this dataset that have missing dates. Go ahead and select the row labels on the left, which highlights the whole rows. Once you have done that, right click and then select Delete. Again, you need to eliminate those missing records for the map to accept the time field.

After you have done that, select the bottom right most cell, L26260, then scroll back up to the top of the worksheet, hold shift, and select cell A1 (this should highlight all of the cells in the sheet that contain data). After that, select the Insert tab, and then select the Table button.

In the pop-up you can keep the default that the table has headers checked. If you lost the selection range in the prior step, you can simply enter it as =$A$1:$L$26260.

After that is done you should have a nice blue formatted table. Select the G column, and then right click and select Format Cells.

Change that date column to a specific date format. Here I just chose the MM/DD/YY format, but it does not matter which; Excel just needs to know it represents a date field.

Finally, you need to save the file as an excel file before we can make the maps. To do this, click File in the top left header menu, and then select Save As. Choose where you want to save the file, and then in the Save as Type dropdown in the bottom of the dialog select xlsx.

Now the data is all prepped to create the map.

Making an Animated Map

Now in this part we basically just do a set of several steps to make our map recognize the correct data and then make the map look nice.

With the prior data all prepped, you should be able to now select the 3d Map option that you can access via the Insert menu (just to the right of where the Excel charts are).

Once you click that, you should get a map opened up that looks like mine below.

Here it actually geocoded the points based on the address (very fast as well), so if you only have address data you can still create some maps. Here I want to change the data though so it uses my Lat/Lon coordinates. In the little table on the far right side, under Layer 1, I deleted all of the fields except for Lat by clicking the large X to their right (see the X circled in the screenshot below). Then I selected the + Add Field option, and then selected my Lng field.

After you select that, you can select the dropdown just to the right of the field and set it as Longitude. Next navigate down slightly to the Time option, and there select the DATE field.

Now here I want to make a chart similar to the Carto graph of the density, so in the top of the layer column I select the blob looking thing (see its drawn outline). You will then get various options like the below screenshot. Adjust these to your liking, but for this map I made the radius of influence a bit larger, and made the opacity not 100%, but slightly transparent at 80%.

Next up is setting the color of the heatmap. The default color scale uses the typical rainbow, which should be avoided for multiple reasons, one of which is color-blindness. So in the dropdown for colors select Custom, and then you will get the option to create your own color ramp. If you click on one of the color swatches you will then get options to specify the color in a myriad of ways.

Here I use the multi-hue pink-purple color scheme via ColorBrewer with just three steps. You can see in the above screenshot I set the lowest pink step via the RGB colors (which you can find on the color brewer site.) Below is what my color ramp looks like in the end.

Next we want to set the style of the map. I like the monotone backgrounds, as they make the animated kernel density pop out much more (see also my blog post, When should we use a black background for a map). It is easy to experiment with all of these different settings though and see which ones you like more for your data.

Next I am going to change the format of the time notation in the top right of the map. Left click to select the box around the time part, and then right click and select Edit.

Here I change to the simpler Month/Year. Depending on how fast the animation runs, you may just want to change it to year. But you can leave it more detailed if you are manually dragging the time slider to look for trends.

Finally, the current default is to show all of the data permanently. There are examples where you may want to do that (see the famous example by Nathan Yau mapping the growth of Wal Mart), but here we do not want that. So navigate back to the Layer options on the right hand side, and in the little tiny clock above the Time field select the dropdown, and change it to Data shows for an instant.

Finally I select the little cog in the bottom of the map window to change the time options. Here I set the animation to run longer at 30 seconds. I also set the transition duration to slightly longer at 5 seconds. (Think of the KDE as a moving window in time.)

After that you are done! You can zoom in the map, set the slider to run (or manually run it forward/backward). Finally you can export the map to an animated file to share or use in presentations if you want. To do that click the Create Video option in the toolbar in the top left.

Here is my exported video.


Now go make some cool maps!

New working paper: The effect of housing demolitions on crime in Buffalo, New York

I have a new working paper up, The effect of housing demolitions on crime in Buffalo, New York. This is in conjunction with my colleagues Dae-Young Kim and Scott Phillips, who are at SUNY Buffalo. Below is the abstract.

Objectives: From 2010 through 2015, the city of Buffalo demolished over 2,000 residences. This study examines whether those demolitions resulted in crime reductions.

Methods: Analysis was conducted at micro places matching demolished parcels to comparable control parcels with similar levels of crime. In addition, spatial panel regression models were estimated at the census tract and quarterly level, taking into account demographic characteristics of neighborhoods.

Results: We find that at the micro place level, demolitions cause a steep drop in reported crime at the exact parcel, and result in additional crime decreases at buffers of up to 1,000 feet away. At the census tract level, results indicated that demolitions reduced Part 1 crimes, but the effect was not statistically significant across different models.

Conclusions: While concerns over crime and disorder are common for vacant houses, the evidence that housing demolitions are an effective crime reduction solution is only partially supported by the analyses here. Future research should compare demolitions in reference to other neighborhood revitalization processes.

As always, if you have feedback/comments let me know.

And here are a few maps from the paper!

Communities and Crime

This was my first semester teaching undergrads at UT Dallas. I taught the Communities and Crime undergrad course. I thought it went very well, and I was impressed with the undergrads here. For the course I had students do a bunch of different prediction assignments based on open data in Dallas, such as predicting what neighborhood has the most crime, or which specific bar has the most assaults. The idea being they would use the theories I discussed in the prior lecture to make the best predictions.

For their final assignment, I had students predict an arbitrary area to capture the most robberies in 2016 (up to that point they had only been predicting crimes in 2015). I used the same metric that NIJ is using in their crime forecasting challenge – the predictive accuracy index. This is simply % crime/% area, so students who give larger areas are more penalized. This ended up producing a pretty neat capstone to the end of the semester.
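
For reference, the predictive accuracy index is easy to compute directly:

#predictive accuracy index: (% of crime captured)/(% of area covered)
def pai(crimes_in_area, total_crimes, area, total_area):
    return (crimes_in_area/float(total_crimes))/(area/float(total_area))

#e.g. capturing 10% of robberies in 1% of the city area gives a PAI of 10
print(pai(100, 1000, 1.0, 100.0))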

Below is a screen shot of the map, and here is a link to an interactive version. (WordPress.com sites only allow specific types of iframe sources, so my dropbox src link to the interactive Leaflet map gets stripped.)

Look forward to teaching this class again (as of now it seems I will regularly offer it every spring).

More news on classes to come soon. I am teaching GIS applications in Criminology online over the summer. For a quick idea about the content, it will be almost the same as the GIS course in criminal justice I previously taught at SUNY.

In short, if you think maps rock then you should take my classes 😉

Scraping Meth Labs with Python

For a while in my GIS courses I have pointed to the DEA’s website that has a list of busted meth labs across the country, named the National Clandestine Laboratory Register. Finally a student has shown some interest in this, so I spent a little time writing a scraper in Python to grab the data. For those who would just like the data, here I have a csv file of the scraped labs that are geocoded to the city level. And here is the entire SPSS and Python script to go from the original PDF data to the finished product.

So first off, if you visit the DEA website, you will see that each state has its own PDF file (for example here is Texas) that lists all of the registered labs, with the county, city, street address, and date. To turn this into usable data, I am going to do three steps in Python:

  1. download the PDF file to my local machine using urllib python library
  2. convert that PDF to an xml file using the pdftohtml command line utility
  3. use Beautifulsoup to parse the xml file

I will illustrate each in turn and then provide the entire Python script at the end of the post.

So first, let’s import the libraries we need. Note that I downloaded the pdftohtml utility and placed its location on the system path of my Windows machine. Then we need to set a folder where we will download the files to on our local machine. Finally I create the base url for our meth labs.

from bs4 import BeautifulSoup
import urllib, os

myfolder = r'C:\Users\axw161530\Dropbox\Documents\BLOG\Scrape_Methlabs\PDFs' #local folder to download stuff
base_url = r'https://www.dea.gov/clan-lab' #online site with PDFs for meth lab seizures

Now to just download the Texas pdf file to our local machine we would simply do:

a = 'tx'
url = base_url + r'/' + a + '.pdf'
file_loc = os.path.join(myfolder,a)
urllib.urlretrieve(url,file_loc + '.pdf')

If you are following along and replaced the path in myfolder with a folder on your personal machine, you should now see the Texas PDF downloaded in that folder. Now I am going to use the command line to turn this PDF into an xml document using the os.system() function.

#Turn to xml with pdftohtml, does not need xml on end
cmd = 'pdftohtml -xml ' + file_loc + ".pdf " + file_loc
os.system(cmd)

You should now see that there is an xml document to go along with the Texas file. You can check out its format using a text editor (wordpress does not seem to like me showing it here).

So basically we can use the top and the left attributes within the xml to identify what row and what column the items are in. But first, we need to read in this xml and turn it into a BeautifulSoup object.

MyFeed = open(file_loc + '.xml')
textFeed = MyFeed.read()
FeedParse = BeautifulSoup(textFeed,'xml')
MyFeed.close()

Now the FeedParse item is a BeautifulSoup object that you can query. In a nutshell, we have a top level page tag, and then within that you have a bunch of text tags. Here is the function I wrote to extract that data and dump it into tuples.

#Function to parse the xml and return the line by line data I want
def ParseXML(soup_xml,state):
    data_parse = []
    page_count = 1
    pgs = soup_xml.find_all('page')
    for i in pgs:
        txt = i.find_all('text')
        order = 1
        for j in txt:
            value = j.get_text() #text
            top = j['top']
            left = j['left']
            dat_tup = (state,page_count,order,top,left,value)
            data_parse.append(dat_tup)
            order += 1
        page_count += 1
    return data_parse

So with our Texas data, we could call ParseXML(soup_xml=FeedParse,state=a) and it will return all of the data nested in those text tags. We can just put these all together and loop over all of the states to get all of the data. Since the PDFs are not that large it works quite fast, under 3 minutes on my last run.

from bs4 import BeautifulSoup
import urllib, os

myfolder = r'C:\Users\axw161530\Dropbox\Documents\BLOG\Scrape_Methlabs\PDFs' #local folder to download stuff
base_url = r'https://www.dea.gov/clan-lab' #online site with PDFs for meth lab seizures
                                           #see https://www.dea.gov/clan-lab/clan-lab.shtml
state_ab = ['al','ak','az','ar','ca','co','ct','de','fl','ga','guam','hi','id','il','in','ia','ks',
            'ky','la','me','md','ma','mi','mn','ms','mo','mt','ne','nv','nh','nj','nm','ny','nc','nd',
            'oh','ok','or','pa','ri','sc','sd','tn','tx','ut','vt','va','wa','wv','wi','wy','wdc']
            
state_name = ['Alabama','Alaska','Arizona','Arkansas','California','Colorado','Connecticut','Delaware','Florida','Georgia','Guam','Hawaii','Idaho','Illinois','Indiana','Iowa','Kansas',
              'Kentucky','Louisiana','Maine','Maryland','Massachusetts','Michigan','Minnesota','Mississippi','Missouri','Montana','Nebraska','Nevada','New Hampshire','New Jersey',
              'New Mexico','New York','North Carolina','North Dakota','Ohio','Oklahoma','Oregon','Pennsylvania','Rhode Island','South Carolina','South Dakota','Tennessee','Texas',
              'Utah','Vermont','Virginia','Washington','West Virginia','Wisconsin','Wyoming','Washington DC']

all_data = [] #this is the list that the tuple data will be stashed in

#Function to parse the xml and return the line by line data I want
def ParseXML(soup_xml,state):
    data_parse = []
    page_count = 1
    pgs = soup_xml.find_all('page')
    for i in pgs:
        txt = i.find_all('text')
        order = 1
        for j in txt:
            value = j.get_text() #text
            top = j['top']
            left = j['left']
            dat_tup = (state,page_count,order,top,left,value)
            data_parse.append(dat_tup)
            order += 1
        page_count += 1
    return data_parse

#This loops over the pdfs, downloads them, turns them to xml via pdftohtml command line tool
#Then extracts the data

for a,b in zip(state_ab,state_name):
    #Download pdf
    url = base_url + r'/' + a + '.pdf'
    file_loc = os.path.join(myfolder,a)
    urllib.urlretrieve(url,file_loc + '.pdf')
    #Turn to xml with pdftohtml, does not need xml on end
    cmd = 'pdftohtml -xml ' + file_loc + ".pdf " + file_loc
    os.system(cmd)
    #parse with BeautifulSoup
    MyFeed = open(file_loc + '.xml')
    textFeed = MyFeed.read()
    FeedParse = BeautifulSoup(textFeed,'xml')
    MyFeed.close()
    #Extract the data elements
    state_data = ParseXML(soup_xml=FeedParse,state=b)
    all_data = all_data + state_data

Now, going from those sets of tuples to actually formatted data takes a bit more work, and I used SPSS for that. See here for the full set of scripts used to download, parse, and clean up the data. Basically it is a little more complicated than just going from long to wide using the top marker, as some rows are off slightly. There are also complications from long addresses being split across two lines, and finally there are just some data errors and fields being merged together. The SPSS code solves a bunch of that, and also includes scripts to geocode the data to the city level using the Google geocoding API.
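
To give a flavor of the long-to-wide step in python (the real cleaning was done in SPSS and handles the messier cases, so treat this as a sketch):

#group text elements into rows by (state,page,top), then order within rows by left
from collections import defaultdict

rows = defaultdict(list)
for state,page,order,top,left,value in all_data:
    rows[(state,page,int(top))].append((int(left),value))
wide = [[v for l,v in sorted(items)] for key,items in sorted(rows.items())]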

Let me know if you do any analysis of this data! I quickly made a time series map of these events via CartoDB. You can definitely see some interesting patterns of DEA concentration over time, although I can’t say if that is due to them focusing on particular areas or if those really are the areas with the most prevalent meth lab problems.

Spatial join points to polygons using Python and SPSS

For a recent use case I had around 60 million points that I wanted to assign to census block groups. ArcGIS was being problematic simply loading in the 60 million point dataset (let alone spatially joining it), so I wrote some python code and will show how to accomplish this using python and SPSS.

First, a shout out to Rex Douglass and this blog post, I’ve adapted most of the python code here from that example. Also before we get started, it will be necessary to download several geospatial libraries for python. Here you need shapely, pyshp, and rtree. As a note, I have only been able to get these to install and work using the IOOS channel for Anaconda, e.g. conda install -c ioos shapely rtree pyshp. (I have not been able to get fiona to work.)

The Python Part

So I will go through a quick rundown of the python code first. All of the data and code to run this yourself can be downloaded here. To start, I import all of the necessary libraries and functions.

import shapefile
from rtree import index
from shapely.geometry import Polygon, Point

The next step is to read in the polygon shapefile that we want to assign points to. You could swap this part out with fiona (if you can get it working!), but I just use the pyshp function shapefile.Reader. You will need to change the data string to point to where the shapefile containing your polygons is located on your local machine.

#load in the shapefile of block groups
data = r'C:\Users\axw161530\Dropbox\Documents\BLOG\Point_inPoly_PythonSPSS'
bg_NYC = shapefile.Reader(data + r'\NYC_BG14_Proj.shp')

In my data these are block groups for New York City, and they are projected into feet using a local projection. (As an FYI, you can open up the prj file for shapefiles in a plain text editor to see the projection.) Now, the shapefile object, bg_NYC here, has several iterables that let you access either the geometries or the records. First we need to get those individual polygons into a list, and then convert them into Polygon objects shapely can deal with.

bg_shapes = bg_NYC.shapes()  #get the iterable for the polygon boundary points
bg_points = [q.points for q in bg_shapes] #convert to list of geometry
polygons = [Polygon(q) for q in bg_points] #convert to a shapely Polygon

Next I am going to do two things. First, to make a vector that matches those Polygons to a particular id, I need to read in the data attributes from the shapefile. This is accomplished via the .records() method. US census geometries have what is oft labeled a GEOID, and in this example shapefile the GEOID ends up being in the second variable slot. The second thing I accomplish here is building an rtree lookup. The motivation for this is that a point in polygon check can be an expensive procedure the more polygons you have. You can first limit the number of potential polygons to check by only testing whether a point falls within the bounding box of a polygon, and then do the more expensive operation on the actual (more complicated) boundary of the polygon.

#build spatial index from bounding boxes
#also has a second vector associating area IDs to numeric id
bg_records = bg_NYC.records() #bg_records[0][1] is the geoid
idx = index.Index() #creating an rtree
c_id = 0
area_match = []
for a,b in zip(bg_shapes,bg_records):
    area_match.append(b[1])
    idx.insert(c_id,a.bbox,obj=b[1])
    c_id += 1

Now we have all the necessary ingredients to make a function that inputs one X,Y point and returns a GEOID. First, the function turns the input X,Y point into a Point object shapely can work with. Second, it does the bounding box lookup I mentioned earlier, using the idx rtree that is available in the global environment. Third, it loops over those resulting polygons that intersected the bounding box, and checks to see if the point is within each polygon using the shapely operation point.within(polygon). If that is true, it returns the associated GEOID, and if none are found it returns None. Again, the objects in this function idx, polygons, and area_match are taken from the global environment. A few additional notes: it will return the first point in polygon found, so if you have overlapping polygons this will simply return the first, not necessarily all of them. That is not the case with our census polygons here though. Second, a point on the exact border between two polygons will return None.

#now can define function with polygons, area_match, and idx as globals
def assign_area(x,y):
    point = Point(x,y)
    for i in idx.intersection((x,y,x,y)): 
        if point.within(polygons[i]):
            return area_match[i]
    return None
#note points on the borders will return None

To test this function I have a set of points in New York for this particular projection already associated with a GEOID.

#now testing
test_vec = [(1003610, 239685, '360050063002'),
            (1006787, 240666, '360050183022'),
            ( 993580, 219484, '360610122001'),
            ( 986385, 214971, '360610115001'),
            ( 947148, 167688, '360850201001'),
            (      0,      0, 'Miss')]

for a,b,c in test_vec:
    print([assign_area(x=a,y=b),c])

And this should subsequently print out at your console:

['360050063002', '360050063002']
['360050183022', '360050183022']
['360610122001', '360610122001']
['360610115001', '360610115001']
['360850201001', '360850201001']
[None, 'Miss']

For those wishing to do this vectorized in python, check out GeoPandas’ functionality. But here I let it churn through the points one by one using SPSS.
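
For reference, a GeoPandas sketch of the same join looks something like the below. The points file name is hypothetical, and the keyword for the spatial predicate has changed across GeoPandas versions, so check the docs for your version:

import geopandas as gpd

#vectorized point in polygon join; both layers need the same projection
pts = gpd.read_file('NYC_Homicides.shp') #hypothetical points shapefile
bgs = gpd.read_file('NYC_BG14_Proj.shp')
joined = gpd.sjoin(pts, bgs, how='left', op='within') #op= is predicate= in newer versions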

The SPSS Part

So once the above function is defined in your SPSS environment, we can simply use SPSSINC TRANS to assign XY data to a block group. Here is a quick example. First we read in some data; this is the homicide data from the New York Times discussed here. It has the points projected in the same feet as the polygons.

*Conducting point in polygon tests with Python and SPSS.
FILE HANDLE data /NAME = "C:\Users\axw161530\Dropbox\Documents\BLOG\Point_inPoly_PythonSPSS".
*Read in the NYC homicide data.
GET TRANSLATE FILE='data\HomPoints_JoinBG.dbf' /TYPE=DBF /MAP .
DATASET NAME HomData.

Now I am going to use the SPSS command SHOW to display the current date and time (so you can see how long the operation takes). This dataset has 4,021 cases of homicide, and the set of polygons we are matching to has around 6,500 block groups. The time the operation takes depends on both, but the rtree search should make the number of polygons less of a factor than simply looping through all of them. Second, I use SPSSINC TRANS to call the python function we previously constructed. Third, this dataset already has the GEOID matched to the points (via ArcGIS), so I check to make sure I get the same results as ArcGIS. In this example there are quite a few points for which ArcGIS failed to return a match, but this operation does. (It would take more investigation on my part though as to why that is the case.)

*Use this to show timing.
SHOW $VARS.

*Now using SPSSINC TRANS to assign geoid.
SPSSINC TRANS RESULT=GeoID2 TYPE=12
  /FORMULA "assign_area(x=XFt,y=YFt)".

SHOW $VARS.
*Check that the operations are all correct (as compared to ArcGIS)
COMPUTE Check = (GEOID = GEOID2).
FREQ Check.

This example runs almost instantly. For some tests with my bigger dataset of 60 million points, matching half a million points to this set of polygons took around 12 minutes.

To End

Again, all of the data and code to run this at once can be downloaded here. I will need to make a blog post at some point of using pyproj to project point data in SPSS as well, such as to go to and from Lat-Lon to a local projection. You probably always want to do geometric operations like this and buffers with projected data, but you may get the data in Lat-Lon or want to export data in Lat-Lon to use online maps.

For those working with crime data, I oft complain that crime frequently occurs on the borders of census geographies. But due to slight differences in resolution, most GIS systems will still assign those crime points to census geographies. I’m not sure if it is a big problem for much analysis in our field, but the proportion on the border is clearly quite large in some instances. For things that often occur outdoors, like robberies and field stops, the proportion is even higher because crime is often recorded at intersections (I have estimates for the percentage of crimes at intersections for 14 years in Albany in this paper). So the problem depends on the crime type or nature of the incident (traffic stops are almost always listed at intersections), but for some analyses I have seen I would bet over 50% of the incidents are on the border of census blocks and/or block groups.

A general way to check this in GIS is to turn your polygon data into lines, and then assign points to the nearest line and check the distance. You will see many points that are very close to the border (say within 5 meters) that really should be undetermined.
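
Using the shapely objects already built in this post, a quick sketch of that check is to measure a point’s distance to the exterior ring of its matched polygon:

#sketch of the border check, reusing polygons, area_match, and assign_area from above
def border_dist(x,y):
    point = Point(x,y)
    geo = assign_area(x,y)
    if geo is None:
        return None
    i = area_match.index(geo) #map the GEOID back to its polygon
    return polygons[i].exterior.distance(point)

#points returning a very small distance (in the projected units, feet here)
#are the ones that arguably should be treated as undetermined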