For a simplified example, consider the network below, with each individual's baseline probability of future risk noted in the nodes. Let's say the local treatment effect reduces that probability to 0, the spillover effect reduces it by half, and you can only treat one node. Who do you treat?
We could select the person with the highest baseline probability, B, and the total reduction ends up being 0.5 (B) + 0.1 (E) = 0.6 (the 0.1 is the spillover effect for E). We could instead choose node A, which has a high baseline probability and the most connections, and the total reduction is 0.4 (A) + 0.05 (C) + 0.05 (D) + 0.1 (E) = 0.6. But it turns out that in this network the optimal node to choose is E, because the spillovers to A and B justify treating a lower probability individual: 0.2 (E) + 0.2 (A) + 0.25 (B) = 0.65.
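That back-of-the-envelope arithmetic is easy to verify in a few lines of Python (the probabilities and edges below are taken straight from the example network):

```python
# Baseline probabilities from the example network
p = {'a': 0.4, 'b': 0.5, 'c': 0.1, 'd': 0.1, 'e': 0.2}
# Undirected edges: a-c, a-d, a-e, b-e
neighbors = {'a': ['c', 'd', 'e'], 'b': ['e'], 'c': ['a'],
             'd': ['a'], 'e': ['a', 'b']}

def total_reduction(treated):
    # Local effect drops the treated node's probability to 0 (a reduction
    # of its full baseline), spillover halves each neighbor's probability
    return p[treated] + sum(p[j] / 2 for j in neighbors[treated])

for node in sorted(p):
    print(node, total_reduction(node))
# e gives the largest total reduction: 0.2 + 0.2 + 0.25 = 0.65
```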
Using this idea of a local effect and a spillover effect, I formulated an integer linear program:
Here p_l(i) is the reduction in the probability due to the local effect, and p_s(i) is the reduction in the probability due to the spillover effect. These probabilities are fixed values you know at the onset, e.g. estimated from some model like in Wheeler, Worden, and Silver (2019) (and Papachristos has related work using the network itself to estimate risk). Each node i then gets two decision variables: L_i will equal 1 if that node is treated, and S_i will equal 1 if the node gets a spillover effect (depending on who is treated). The findings in WP show that these effects are not additive (you don't get extra effects if you are treated and your neighbors are treated, or if you have multiple neighbors treated), and this makes it easier to keep the problem on the probability scale. So we then have our constraints:
Constraint 1 says these are binary 0/1 decision variables. Constraint 2 limits the number of people treated to K (a value that we choose). Constraint 3 ensures that if a node's local decision variable is set to 1, its spillover variable must be 0 (if the local variable is 0, the spillover can be either 0 or 1). Constraint 4 handles the neighbor relations: for node i, the spillover decision variable can only be set to 1 if at least one of its neighbors' local treatment variables is set to 1.
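The formula images from the original post do not carry over here, but the full program can be reconstructed from the prose and the code below (using p^l_i, p^s_i for the fixed probability reductions and L_i, S_i for the decision variables):

```latex
\begin{aligned}
\text{Maximize} \quad & \sum_i p^{l}_i L_i + p^{s}_i S_i \\
\text{subject to} \quad
& L_i, S_i \in \{0,1\} \;\; \forall i && (1)\\
& \textstyle\sum_i L_i = K && (2)\\
& S_i \le 1 - L_i \;\; \forall i && (3)\\
& \textstyle\sum_{j \in N(i)} L_j \ge S_i \;\; \forall i && (4)
\end{aligned}
\]
where $N(i)$ is the set of neighbors of node $i$.
```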
So in the end, if the number of nodes is n, we have 2*n decision variables and 2*n + 1 constraints. I find it easier just to look at code sometimes, so here is this simple network and problem formulated in Python using networkx and pulp. (Here is a full file of the code and data used in this post.)
####################################################
import pulp
import networkx

Nodes = ['a','b','c','d','e']
Edges = [('a','c'),
         ('a','d'),
         ('a','e'),
         ('b','e')]

p_l = {'a': 0.4, 'b': 0.5, 'c': 0.1, 'd': 0.1, 'e': 0.2}   #local effect reductions
p_s = {'a': 0.2, 'b': 0.25, 'c': 0.05, 'd': 0.05, 'e': 0.1} #spillover reductions
K = 1 #number of nodes we can treat

G = networkx.Graph()
G.add_edges_from(Edges)

P = pulp.LpProblem("Choosing_Network_Intervention", pulp.LpMaximize)
L = pulp.LpVariable.dicts("Treated_Units", [i for i in Nodes], lowBound=0, upBound=1, cat=pulp.LpInteger)
S = pulp.LpVariable.dicts("Spillover_Units", [i for i in Nodes], lowBound=0, upBound=1, cat=pulp.LpInteger)

#objective: maximize the total reduction in probability
P += pulp.lpSum( p_l[i]*L[i] + p_s[i]*S[i] for i in Nodes )
#constraint 2: only treat K units
P += pulp.lpSum( L[i] for i in Nodes ) == K
#constraint 3: treated units cannot also receive the spillover
for i in Nodes:
    P += S[i] <= 1 - L[i]
#constraint 4: spillover requires at least one treated neighbor
for i in Nodes:
    ne = G.neighbors(i)
    P += pulp.lpSum( L[j] for j in ne ) >= S[i]

P.solve()
#Should select e for local, and a & b for spillover
print(pulp.value(P.objective))
print(pulp.LpStatus[P.status])
for n in Nodes:
    print([n, L[n].varValue, S[n].varValue])
####################################################
And this returns the correct results, that node E is chosen in this example, and A and B have the spillover effects. In the linked code I provided a nicer function to just pipe in your network, your two probability reduction estimates, and the number of treated units, and it will pipe out the results for you.
For an example with a larger network, just as a proof of concept, I conducted the same analysis, choosing 20 people to treat in a network of 311 nodes I pulled from Rostami and Mondani (2015). I simulated some baseline probabilities to pipe in, and made it so the local treatment effect was a 50% reduction in the probability, and the spillover effect was a 20% reduction. Here red squares are treated, pink circles are the spillover, and non-treated are grey circles. It did not always choose the locally highest probability (largest nodes), but did tend to choose highly connected folks who also had a high probability (though it chose some isolate nodes with a high probability as well).
This problem is solved in an instant. And I think out of the box this will work even for large networks of, say, over 100,000 nodes (I have let CPLEX churn on problems with near half a million decision variables on my desktop overnight). I need to check myself to make 100% sure though. A simple way to make the problem smaller if needed is to conduct the analysis on subsets of connected components, and then shuffle the results back together.
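That connected-components decomposition is only a few lines with networkx. Here is a sketch; `solve_one` stands in for whatever solver you run on each component, and `pick_max_degree` is just a toy stand-in for illustration, not the ILP above:

```python
import networkx as nx

def solve_per_component(G, solve_one):
    # Solve separately on each connected component,
    # then shuffle the results back together
    results = {}
    for comp in nx.connected_components(G):
        sub = G.subgraph(comp)
        results.update(solve_one(sub))
    return results

# Toy stand-in solver: "treat" the highest-degree node in each component
def pick_max_degree(sub):
    best = max(sub.nodes, key=lambda n: sub.degree(n))
    return {n: (n == best) for n in sub.nodes}

# Example network plus a disconnected pair f-g
G = nx.Graph([('a', 'c'), ('a', 'd'), ('a', 'e'), ('b', 'e'), ('f', 'g')])
print(solve_per_component(G, pick_max_degree))
```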
Looking at the results, it is very similar to my choosing representatives work (Wheeler et al., 2019), and I think you could get similar results by just piping in 1's for each of the local and spillover probabilities. One of the things I want to work on going forward though is treatment non-compliance. If we are talking about giving some of these folks social services, they don't always take up your offer (this is a problem in choosing reps for call-ins as well). WP actually relied on this to draw control nodes in their analysis. I thought for a bit the problem of treatment non-compliance in this setting was intractable, but another paper on a totally different topic (Bogle et al., 2019) has given me some recent hope that it can be solved.
This same idea is also related to hot spots policing (think spatial diffusion of benefits). And I have some ideas about that to work on in the future as well (e.g. how wide of a net to cast when doing hot spots interventions given geographical constraints).
]]>
Two tricky parts to this: 1) making the north arrow and scale bar, and 2) figuring out the dimensions to make regular hexagons. As an illustration I use the shooting victim data from Philly (see the working paper for all the details); full data and code to replicate are here. I will walk through a bit of it though.
First to start out, I just use these three libraries, and set the working directory to where my data is.
library(ggplot2)
library(rgdal)
library(proj4)
setwd('C:\\Users\\axw161530\\Dropbox\\Documents\\BLOG\\HexagonMap_ggplot\\Analysis')
Now I read in the Philly shooting data, and then an outline of the city that is projected. Note I read in the shapefile data using rgdal
, which imports the projection info. I need that to be able to convert the latitude/longitude spherical coordinates in the shooting data to a local projection. (Unless you are making a webmap, you pretty much always want to use some type of local projection, and not spherical coordinates.)
#Read in the shooting data
shoot <- read.csv('shootings.csv')
#Get rid of missing
shoot <- shoot[!is.na(shoot$lng),c('lng','lat')]
#Read in the Philly outline
PhilBound <- readOGR(dsn="City_Limits_Proj.shp",layer="City_Limits_Proj")
#Project the Shooting data
phill_pj <- proj4string(PhilBound)
XYMeters <- proj4::project(as.matrix(shoot[,c('lng','lat')]), proj=phill_pj)
shoot$x <- XYMeters[,1]
shoot$y <- XYMeters[,2]
It is a bit of work to make a nice basemap in R and ggplot, but once that upfront work is done then it is really easy to make more maps. To start, the GISTools
package has a set of functions to get a north arrow and scale bar, but I have had trouble with them. The ggsn
package imports the north arrow as a bitmap instead of vector, and I also had a difficult time with its scale bar function. (I have not figured out the cartography
package either, I can’t keep up with all the mapping stuff in R!) So long story short, this is my solution to adding a north arrow and scale bar, but I admit better solutions probably exist.
So basically I just build my own polygons and labels to add into the map where I want. The code is motivated by the functions in GISTools.
#creating north arrow and scale bar, motivation from GISTools package
arrow_data <- function(xb, yb, len) {
  s <- len
  arrow.x = c(0,0.5,1,0.5,0) - 0.5
  arrow.y = c(0,1.7 ,0,0.5,0)
  adata <- data.frame(aX = xb + arrow.x * s, aY = yb + arrow.y * s)
  return(adata)
}

scale_data <- function(llx,lly,len,height){
  box1 <- data.frame(x = c(llx,llx+len,llx+len,llx,llx),
                     y = c(lly,lly,lly+height,lly+height,lly))
  box2 <- data.frame(x = c(llx-len,llx,llx,llx-len,llx-len),
                     y = c(lly,lly,lly+height,lly+height,lly))
  return(list(box1,box2))
}

x_cent <- 830000
len_bar <- 3000
offset_scaleNum <- 64300
arrow <- arrow_data(xb=x_cent,yb=67300,len=2500)
scale_bxs <- scale_data(llx=x_cent,lly=65000,len=len_bar,height=750)

lab_data <- data.frame(x=c(x_cent, x_cent-len_bar, x_cent, x_cent+len_bar, x_cent),
                       y=c( 72300, offset_scaleNum, offset_scaleNum, offset_scaleNum, 66500),
                       lab=c("N","0","3","6","Kilometers"))
This is about the best I have been able to automate the creation of the north arrow and scale bar polygons, while still having flexibility where to place the labels. But now we have all of the ingredients necessary to make our basemap. Make sure to use coord_fixed()
for maps! Also for background maps I typically like making the outline thicker, and then have borders for smaller polygons lighter and thinner to create a hierarchy. (If you don’t want the background map to have any color, use fill=NA
.)
base_map <- ggplot() +
  geom_polygon(data=PhilBound, size=1.5, color='black', fill='darkgrey', aes(x=long,y=lat)) +
  geom_polygon(data=arrow, fill='black', aes(x=aX, y=aY)) +
  geom_polygon(data=scale_bxs[[1]], fill='grey', color='black', aes(x=x, y=y)) +
  geom_polygon(data=scale_bxs[[2]], fill='white', color='black', aes(x=x, y=y)) +
  geom_text(data=lab_data, size=4, aes(x=x,y=y,label=lab)) +
  coord_fixed() + theme_void()

#Check it out
base_map
This is what it looks like on my Windows machine in RStudio; it ends up looking a little different when I export the figure straight to PNG though. Will get to that in a minute.
Now you have your basemap you can superimpose whatever other data you want. Here I wanted to visualize the spatial distribution of shootings in Philly. One option is a kernel density map. I tend to like aggregated count maps though better for an overview, since I don’t care so much for drilling down and identifying very specific hot spots. And the counts are easier to understand than densities.
In geom_hex you can supply a vertical and horizontal parameter to control the size of the hexagon, but supplying the same value for each does not create a regular hexagon. The way the hexagon is oriented in geom_hex, the vertical parameter is vertex to vertex, whereas the horizontal parameter is side to side.
Here are three helper functions. First, wd_hex gives you the horizontal width given the vertical parameter. So if you wanted your hexagon to be 1000 meters vertex to vertex (so a side is 500 meters), wd_hex(1000) returns just over 866. Second, if for your map you wanted to convert the counts to densities per unit area, you can use hex_area to figure out the size of your hexagon. Going again with our 1000 meter vertex-to-vertex hexagon, hex_area(1000/2) is just under 650,000 square meters (or about 0.65 square kilometers).
For maps though, I think it makes the most sense to set the hexagon to a particular area, and hex_dim does that. If you want to set your hexagons to a square kilometer, given our projected data is in meters, we would just do hex_dim(1000^2), which with rounding gives vert/horz measures of about (1241, 1075) to supply to geom_hex.
#ggplot geom_hex you need to supply height and width
#if you want a regular hexagon though, these
#are not equal given the default way geom_hex draws them
#https://www.varsitytutors.com/high_school_math-help/how-to-find-the-area-of-a-hexagon
#get width given height
wd_hex <- function(height){
  tri_side <- height/2
  sma_side <- height/4
  width <- 2*sqrt(tri_side^2 - sma_side^2)
  return(width)
}

#now to figure out the area if you want
#side is simply height/2 in geom_hex
hex_area <- function(side){
  area <- 6 * ( (sqrt(3)*side^2)/4 )
  return(area)
}

#if you want your hexagon to have a specific area, need the inverse function
#gives height and width for a specific area
hex_dim <- function(area){
  num <- 4*area
  den <- 6*sqrt(3)
  vert <- 2*sqrt(num/den)
  horz <- wd_hex(vert)
  return(c(vert,horz))
}

my_dims <- hex_dim(1000^2) #making it a square kilometer
sqrt(hex_area(my_dims[1]/2)) #check to make sure it is square km
#my_dims also checks out with https://hexagoncalculator.apphb.com/
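For readers who want to double-check the geometry outside of R, here is the identical math in Python (not part of the original post's workflow, just a cross-check of the three helpers):

```python
import math

def wd_hex(height):
    # horizontal width of a regular hexagon given its
    # vertex-to-vertex (vertical) height, as geom_hex draws them
    tri_side = height / 2.0
    sma_side = height / 4.0
    return 2 * math.sqrt(tri_side**2 - sma_side**2)

def hex_area(side):
    # area of a regular hexagon with the given side length
    return 6 * (math.sqrt(3) * side**2) / 4

def hex_dim(area):
    # inverse: (vertical, horizontal) dimensions for a target area
    vert = 2 * math.sqrt(4 * area / (6 * math.sqrt(3)))
    return vert, wd_hex(vert)

print(wd_hex(1000))        # just over 866
print(hex_area(1000 / 2))  # just under 650,000 square meters
print(hex_dim(1000**2))    # about (1241, 1075)
```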
Now onto the good stuff. I tend to think discrete bins make nicer looking maps than continuous fills. So through some trial and error you can figure out the best way to make those via cut. Also I make the outlines for the hexagons thin and white, and make the hexagons semi-transparent, so you can see the outline of the city. I like how, by default, areas with no shootings are not given any hexagon.
lev_cnt <- seq(0,225,25)
shoot_count <- base_map +
  geom_hex(data=shoot, color='white', alpha=0.85, size=0.1, binwidth=my_dims,
           aes(x=x,y=y,fill=cut(..count..,lev_cnt))) +
  scale_fill_brewer(name="Count Shootings", palette="OrRd")
We have come so far; now to automate exporting the figure to a PNG file. I've had trouble recently getting journals to not bungle vector figures I forward them, so I am just going with high-res PNG to avoid that hassle. If you render the figure and use the GUI to export to PNG, it won't be as high resolution, so you can often easily see aliasing pixels (e.g. the pixels in the north arrow for the earlier base map image).
png('Philly_ShootCount.png', height=5, width=5, units="in", res=1000, type="cairo")
shoot_count
dev.off()
Note the font size/location in the exported PNG are often not quite exactly as they are when rendered in the RGUI window or RStudio on my windows machine. So make sure to check the PNG file.
My notes on how to do this follow. Data and code to follow along can be downloaded from here.
First, in my do file I have a typical start up that sets the working directory and logs the results to a text file. I use set more off so I don't have to keep telling Stata to scroll down. The next part is partly idiosyncratic to my Stata set up: I call Stata from a centralized install location here at EPPS in UTD. I don't have write access there, so to install commands I need to set my own place to put them on my local machine. So I add a location on my machine to adopath, and I also do net set ado to that same location.
Finally, for here I ssc installed grstyle and palettes. That code is currently commented out, as I only need to install them once. But it is good for others to know what extra packages they need to fully replicate your results.
**************************************************************************
*START UP STUFF
*Set the working directory and plain text log file
cd "C:\Users\axw161530\Dropbox\Documents\BLOG\Stata_NiceMargins\Analysis"
*log the results to a text file
log using "LogitModels.txt", text replace
*so the output just keeps going
set more off
*let stata know to search for a new location for stata plug ins
adopath + "C:\Users\axw161530\Documents\Stata_PlugIns\V15"
net set ado "C:\Users\axw161530\Documents\Stata_PlugIns\V15"
*In this script I use
*net install http://www.stata-journal.com/software/sj18-3/gr0073/
*ssc install grstyle, replace
*ssc install palettes, replace
**************************************************************************
Here is what I did to change my default graph settings. Again check out Ben Jann's awesome website he made with all the great examples. That will be more productive than me commenting on every individual line.
**************************************************************************
*Graph Settings
grstyle clear
set scheme s2color
grstyle init
grstyle set plain, box
grstyle color background white
grstyle set color Set1
grstyle yesno draw_major_hgrid yes
grstyle yesno draw_major_ygrid yes
grstyle color major_grid gs8
grstyle linepattern major_grid dot
grstyle set legend 4, box inside
grstyle color ci_area gs12%50
**************************************************************************
This part is pretty straightforward. I read in the data as a CSV file and generate a new variable that is the weekly average number of crimes within 1000 feet in the historical crime data (see the working paper for more details). One trick I like to use with regression models with many terms is to make a global that specifies those variables, so I don't need to retype them a bunch; I named it $ContVars here. Finally, for simplicity in this script I am just examining the burglary incidents, so I get rid of the other crimes using the keep command.
**************************************************************************
*DATA PREP
*Getting the data
import delimited CrimeStrings_withData.csv
*Making the previous densities per time period
generate buff_1000_1 = buff_1000 * (7/1611)
*control variables used in the regression
global ContVars "d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16 d17 d18 whiteperc blackperc hispperc asianperc under17 propmove perpoverty perfemheadhouse perunemploy perassist i.month c.dateint"
*For here I am just examining burglary incidents
keep if crimetype == 3
**************************************************************************
So basically what I want to do in the end is to draw an interaction effect between a dummy variable (whether a crime resulted in an arrest) and a continuous variable (the historical crime density at a location). I am predicting whether a crime results in a near-repeat follow up; hot spots with more crime on average will have more near-repeats simply by chance.
When displaying that interaction effect though, I only want to limit it to the support of the historical crime density in the sample. Stated another way, the historical crime density variable basically ranges from 0 to 2.5 in the sample, so I don't care what the interaction effect is at a historical crime density of 3.
To do that in Stata, I use summarize to get the min/max of that historical crime density and pipe them into globals. The Grid global then tells Stata how often to calculate those effects. Too few points and the plot may not look smooth; too many and margins will take forever to calculate the results. Here 100 points is plenty.
*I will need this later to draw the margins over the support
*Of the prior crime density
summarize buff_1000_1
global MyMin = r(min)
global MyMax = r(max)
global Grid = ($MyMax-$MyMin)/100
This may seem like overkill, as I could just fill in those values by hand later. If you look at the replication code for my paper though, I ended up doing this same thing for four different crimes and two different estimates, so I wanted an automated approach that avoids as many magic numbers as possible.
Now I estimate my logistic regression model, with my interaction effect and the global $ContVars
control variables I specified earlier. Here I am predicting whether a burglary has a follow up near-repeat crime (within 1000 feet and 7 days). I think an arrest will reduce that probability.
*Now estimate the logit model
logit future0_1000_7 i.arrest c.buff_1000_1 i.arrest#c.buff_1000_1 $ContVars
Note that the estimate of the interaction effect looks like this:
---------------------------------------------------------------------------------
      future0_1000_7 |     Coef.   Std. Err.      z    P>|z|  [95% Conf. Interval]
---------------------+-----------------------------------------------------------
            1.arrest | -.0327821    .123502    -0.27   0.791  -.2748415   .2092774
         buff_1000_1 |  1.457795   .0588038    24.79   0.000   1.342542   1.573048
                     |
arrest#c.buff_1000_1 |
                   1 | -.5013652   .2742103    -1.83   0.067  -1.038807   .0360771
---------------------------------------------------------------------------------
So how exactly do I interpret this? It is very difficult to interpret the coefficients directly — it is much easier to make graphs and visualize what those effects actually mean on the probability of a nearrepeat burglary occurring.
Now the good stuff. Basically I want to show the predicted probability of a nearrepeat follow up crime, conditional on whether an arrest occurred, as well as the historical crime density. The first line uses quietly
, so I don’t get the full margins table in the output. The second is just all the goodies to make my nice grey scale plot. Note I name the plot — this will be needed for later combining multiple plots.
*Create the two margin plots
quietly margins arrest, at(c.buff_1000_1=($MyMin ($Grid) $MyMax))
marginsplot, recast(line) noci title("Residential Burglary, Predictive Margins") xtitle("Historical Crime Density") ytitle("Pr(Future Crime = 1)") plot1opts(lcolor(black)) plot2opts(lcolor(gs6) lpattern("")) legend(on order(1 "no arrest" 2 "arrest")) name(main)
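What margins is computing here can be sketched by hand. The snippet below uses the three coefficients from the logit output above; the intercept plus the control-variable contribution is rolled into a single made-up `baseline` value (an assumption for illustration only), so the absolute probabilities are not the paper's, but the shape of the interaction is:

```python
import math

# Coefficients from the logit output above
b_arrest = -0.0327821
b_dens = 1.457795
b_inter = -0.5013652
baseline = -2.0  # hypothetical: intercept plus controls held at fixed values

def prob(arrest, density):
    # predicted probability of a near-repeat follow up from the logit model
    xb = (baseline + b_arrest * arrest + b_dens * density
          + b_inter * arrest * density)
    return 1 / (1 + math.exp(-xb))

# evaluate over the support of the historical crime density, 0 to 2.5,
# in 100 steps, mirroring the $Grid global
grid = [i * 2.5 / 100 for i in range(101)]
diff = [prob(1, d) - prob(0, d) for d in grid]
# the arrest effect on the probability scale grows with the prior density
print(diff[0], diff[-1])
```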
You could superimpose confidence intervals on the prior plot, but those are the pointwise intervals for each individual line; they don't directly tell you about the difference between the two lines. Visualizing the difference between lines can often lead to misinterpretation (e.g. remember the Cleveland example of visualizing the differences in exports/imports originally drawn by Playfair). Also superimposing multiple error bands tends to get visually messy. So a solution is to directly graph the estimate of the difference between those two probabilities in a separate graph. (Another idea I've seen is here: a CI of the difference set to the midpoint of the two lines.)
quietly margins, dydx(arrest) at(c.buff_1000_1=($MyMin ($Grid) $MyMax))
marginsplot, recast(line) plot1opts(lcolor(gs8)) ciopt(color(black%20)) recastci(rarea) title("Residential Burglary, Average Marginal Effects of Arrest") xtitle("Historical Crime Density") ytitle("Effects on Pr(Future Crime)") name(diff)
Yay for the fact that Stata can now draw transparent areas. Here we can see that even though the marginal effect grows at higher prior crime densities (suggesting an arrest has a larger effect on reducing near-repeats in hot spots), the confidence interval of the difference grows larger as well.
To end I combine the two plots together (same image at the beginning of the post), and then export them to a higher resolution PNG.
*Now combining the plots together
graph combine main diff, xsize(6.5) ysize(2.7) iscale(.8) name(comb)
graph close main diff
graph export "BurglaryMarginPlot.png", width(6000) replace
I am often doing things interactively in the Stata shell when I am writing up scripts. Including redoing charts. To be able to redo a chart with the same name, you need to not only use graph close
, but also graph drop
it from memory. Then just dropping all the data and using exit
will finish out your script and close down Stata entirely.
**************************************************************************
*FINISHING UP THE SCRIPT
*closing the combined graph
graph close comb
*This is necessary if you want to reuse the plot names
graph drop _all
*Finish the script.
drop _all
exit, clear
**************************************************************************
]]>I’ve posted Python code to replicate the analysis, including the original network nodes and edges group data. I figured I would go through a quick example of applying the code for others to use the algorithm.
The main idea is that for a focused deterrence initiative, for the call-ins you want to identify folks to spread the deterrence message around the network. When working with several PDs I figured looking at who was called in would be interesting. Literally the first network graph I drew is below on the left; folks who were called in are the big red squares. This was one of the main problem gangs, and the PD had been doing call-ins for over a year at this point. Those are not quite the worst set of four folks to call in based on the topology of the network, but damn close.
But to criticize the PD I need to come up with a better solution, which is the graph on the right-hand side. The larger red squares are my suggested call-ins, and they reach everyone within one step. That means everyone is at most one link away from someone who attended the call-in. A set like this, which colors in the entire graph, is called a dominant set of the graph.
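Checking whether a proposed set of call-ins reaches everyone within one step is easy with networkx. This helper is my own illustration, not part of the replication code (networkx also ships a greedy `dominating_set` function, though it does not aim for the smallest set):

```python
import networkx as nx

def is_dominating(G, picked):
    # every node is either picked, or adjacent to a picked node
    covered = set(picked)
    for n in picked:
        covered.update(G.neighbors(n))
    return covered == set(G.nodes)

# tiny example: a path a-b-c-d-e
G = nx.path_graph(['a', 'b', 'c', 'd', 'e'])
print(is_dominating(G, ['b', 'd']))  # True, everyone within one step
print(is_dominating(G, ['a', 'b']))  # False, d and e are not reached
```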
Below I give a quicker example using my code so others can generate the dominant set (instead of going through all of the replication analysis). If you are a PD interested in applying this to your focused deterrence initiative, let me know!
So first, to set up your Python code, I import all of the needed libraries (the only non-standard one is networkx). Then I import my set of functions, named MyFunctions.py, and change the working directory.
############################################################
#The libraries I need
import itertools
import networkx as nx
import csv
import sys
import os
#Now importing my own functions I made
locDir = r'C:\Users\axw161530\Dropbox\Documents\BLOG\DominantSet_Python'
sys.path.append(locDir)
from MyFunctions import *
#setting the working directory to this location
os.chdir(locDir)
#print(os.getcwd())
############################################################
In the next part I read in the CSV data for City 4 Gang 1, both the nodes and the edges. Then I create a networkx graph based simply on the edges. Technically I do not use the node information at all for this, just the edges that list a source and a target.
############################################################
#Reading in the csv files that have the nodes and the edges
#And turning into a networkX graph
#simple function to read in csv files
def ReadCSV(loc):
    tup = []
    with open(loc) as f:
        z = csv.reader(f)
        for row in z:
            tup.append(tuple(row))
    return tup
#Turning my csv files into networkx objects
nd = ReadCSV('Nodes_City4_Gang1.csv')
ed = ReadCSV('Edges_City4_Gang1.csv')
head_node = nd.pop(0) #First row for both is a header
head_edge = ed.pop(0)
#Turning my csv files into networkx objects
C1G4 = nx.Graph()
C1G4.add_edges_from(ed)
############################################################
Now it is quite simple; getting my suggested dominant set is as simple as this function call:
ds_C1G4 = domSet_Whe(C1G4)
print(ds_C1G4)
In my current session this gives the nodes ['21', '18', '17', '16', '3', '22', '20', '6']. Which if you look at my original graph is somewhat different, but the differences are all essentially single swaps where the best node to choose is arbitrary.
I have a bunch of other functions in the analysis. One of interest is, given who is under probation/parole, who are the best people to call in (see the domSet_WheSub
function). Again if you are interested in pursuing this further, always feel free to reach out to me.
The police do not prevent crime. This is one of the best kept secrets of modern life. Experts know it, the police know it, but the public does not know it. Yet the police pretend that they are society’s best defense against crime and continually argue that if they are given more resources, especially personnel, they will be able to protect communities against crime. This is a myth.
This quote is now paraded as backwards thinking, often presented before discussing the overall success of hot spots policing. If you didn't read the book, you might come to the conclusion that this quote is a parallel to the nothing-works mantra in corrections research. That take is not totally off-base: Police for the Future was published in 1994, so it was just at the start of the CompStat revolution and hot spots policing. The evidence base was no doubt much thinner at that point and deserving of skepticism.
I don't take the contents of David's book to be as hard-line on the stance that police cannot reduce crime, at least at the margins, as his opening quote suggests though. He has a chapter devoted to traditional police responses (crackdowns, asset forfeiture, stings, tracking chronic offenders), where he mostly expresses scientific skepticism of their effectiveness given their cost. He also discusses problem-oriented approaches to solving crime problems, how to effectively measure police performance (outputs vs outcomes), and promotes evaluation research to see what works. Still all totally relevant twenty-plus years later.
The greater context of David’s quote comes from his work examining police forces internationally. David was more concerned about professionalization of police forces. Part of this is better record keeping of crimes, and in the short term crime rates will often increase because of this. In class he mocked metrics used to score international police departments on professionalization that used crime as a measure that went into their final grade. He thought the function of the police was broader than reducing crime to zero.
I was in David’s last class he taught at Albany. The last day he sat on the desk at the front of the room and expressed doubt about whether he accomplished anything tangible in his career. This is the fate of most academics. Very few of us can point to direct changes anyone implemented in response to our work. Whether something works is independent of an evaluation I conduct to show it works. Even if a police department takes my advice about implementing some strategy, I am still only at best indirectly responsible for any crime reductions that follow. Nothing I could write would ever compete with pulling a single person from a burning car.
While David was being humble he was right. If I had to make a guess, I would say David’s greatest impact likely came about through his training of international police forces — which I believe spanned multiple continents and included doing work with the United Nations. (As opposed to saying something he wrote had some greater, tangible impact.) But even there if we went and tried to find direct evidence of David’s impact it would be really hard to put a finger on any specific outcome.
If a police department wanted to hire me, but I would be fired if I did not reduce crimes by a certain number within that first year, I would not take that job. I am confident that I can crunch numbers with the best of them, but given real constraints of police departments I would not take that bet. Despite devoting most of my career to studying policing interventions to reduce crime, even with the benefit of an additional twenty years of research, I’m not sure if David’s quote is as laughable as many of my peers frame it to be.
]]>The main benefit of posting preprints is to get your work more exposure. This occurs in two ways: one is that traditional peer-reviewed work is often behind paywalls. This prevents the majority of non-academics from accessing your work. This point about paywalls applies just the same to preventing other academics from reading your work in some cases. So while the prior blog post I linked by Laura Huey notes that you can get access to some journals through your local library, it takes several steps. Adding in steps, you are basically losing out on folks who don't want to spend the time. Even through my university it is not uncommon for me to be unable to access a journal article. I can technically get the article through interlibrary loan, but that takes more time, time I am not going to spend unless I really want to see the contents of the article.
This I consider a minor benefit. Ultimately, if you want your academic work to be more influential in the field, you need to write about it in non-academic outlets (like magazines and newspapers) and present it directly to CJ practitioner audiences. But there are a few CJ folks who read journal articles whom you are missing, as well as a few academics who are missing your work because of that paywall.
A bigger benefit is that you get your work out much more quickly. The academic publishing cycle makes it impossible to publish your work in a timely fashion. If you are lucky, once your paper is finished it will be published in six months. More realistically it will be a year before it is published online in our field (my linked article only considers time to acceptance; tack on another month or two to go through copyediting).
Honestly, I publish preprints because I get really frustrated with waiting on peer review. No offense to my peers, but I do good work that I want others to read; I do not need a stamp from three anonymous reviewers to validate it. I would need an experiment to know for sure (having a preprint might displace some views/downloads from the published version), but I believe the earlier and open versions on average double the exposure my papers would have had compared to just publishing in traditional journals. It is likely a much different audience than traditional academic crim people, but that is a good thing.
But even without that extra exposure I would still post preprints, because it makes me happy to self-publish my work when it is at the finish line, in what can be a miserably long and very much delayed gratification process otherwise.
Besides the actual time cost of posting a preprint (the next section details it more precisely; it isn’t much work), I will go through several common arguments for why posting preprints is a bad idea. I don’t believe they carry much weight, and I have not personally experienced any of them.
What if I am wrong — Typically I only post papers either when I am doing a talk or when the work is ready to go out for peer review, so I don’t encourage posting really early versions of work. Even at this stage there is never any guarantee you did not make a big mistake (I make mistakes all the time!), but the sky will not fall down if you post a preprint that is wrong. Just take it down if you feel it is a net negative to the scholarly literature (a very hard bar to clear; the direction of your hypothesis test results alone does not make the work a net positive or negative). If you think it is good enough to send out for peer review, it is definitely at the stage where you can share the preprint.
What if the content changes after peer review — My experience with peer review is mostly pedantic stuff: lit. review/framing complaints, do some robustness checks for the analysis, beef up the discussion. I have never had a substantive interpretation change after peer review. Even if you did, you can just update the preprint with the new results. While this could be bad (an early finding gets picked up that is later invalidated), this is very rare and a risk I am willing to take.
Note peer review is not infallible, so counting on peer review to catch your mistakes is mostly a false expectation. Peer review does not spin your work into gold, you have to do that yourself.
My ideas may get scooped — This I have never personally had happen to me. Posting a preprint can actually prevent more direct plagiarism, as you have a timestamped example of your work. In terms of someone taking your idea and rewriting it, this is a potential risk (the same risk as presenting at a conference), really only applicable for folks working on secondary data analysis. With the preprint posted the other person should at least cite your work, but sorry, neither presenting work nor posting a preprint gives you sole ownership of an idea.
Journals will view preprints negatively — Or journals do not allow preprints. I haven’t come across a journal in our field that forbids preprints. I’ve had one reviewer (out of likely 100+ at this point) note the posted preprint as a negative (suggesting I was double publishing or plagiarizing my own work). An editor who actually reads reviews should know that is not a substantive critique. That was likely just a dinosaur reviewer who wasn’t familiar with the idea of preprints (and they gave an overall positive review in that one case, so it did not get the paper axed). If you are concerned about this, just email the editor for feedback, but I’ve never had a problem from editors.
Peer reviewers will know who I am — This I admit is a known unknown. Peer review in our crim/CJ journals is mostly double blind (most geography and statistics journals I have reviewed for are not; I know who the authors are). If you presented the work at a conference you have already given up anonymity, and the field is small enough that for a good chunk of work reviewers can guess who the author is anyway. So your anonymity is often a moot point at the peer review stage.
So I don’t know how much reviewers are biased if they know who you are (it can work both ways; if you draw a friend they may be more apt to give a nicer review). It likely makes a small difference at the margins, but again I personally don’t think the minor risk/cost outweighs the benefits.
These negatives are no doubt real, but again I personally find them minor enough risks to not outweigh the benefits of posting preprints.
All posting a preprint involves is uploading a PDF file of your work to either your website or a public hosting service. My current workflow is to keep the different components of a journal article in several Word documents (I don’t use LaTeX very often, and Word doesn’t work so well with one big file, especially with many pictures). I then export those components to PDF files and stitch them together using the freeware tool PDFtk. It has a GUI and a command line interface, so I just have a bat file in my paper directory that lists something like:
pdftk.exe TitlePage.pdf MainPaper.pdf TablesGraphs.pdf Appendix.pdf cat output CombinedPaper.pdf
So just a double click to update the combined pdf when I edit the different components.
Public hosting services to post preprints I have used in the past are Academia.edu, SSRN, and SocArXiv, although again you could just post the PDF on your webpage (and Google Scholar will eventually pick it up). I use SocArXiv now, as SSRN currently makes you sign up for an account to download PDFs (again a hurdle, the same as going through interlibrary loan). Academia.edu also makes you sign up for an account, and has weird terms of service.
Here is an example paper of mine on SocArXiv. (Note the total downloads, most of my published journal articles have fewer than half that many downloads.) SocArXiv also does not bother my coauthors to create an account when I upload a paper. If we had a more criminal justice focused depository I would use that, but SocArXiv is fine.
There are other components of open science I should write about — such as replication materials/sharing data, and open peer reviewed journals, but I will leave those to another blog post. Posting preprints takes very little extra work compared to what academics are currently doing, so I hope more people in our field start doing it.
]]>
]]>The linked paper does not provide data, so for a similar illustration I grab the lower super output area crime stats from here, and use the 2008 through 2017 data to predict homicides in 2018 through February 2019. I’ve posted the SPSS code I used to do the data munging and graphs here; all the stats could be done in Excel as well (just sorting, cumulative sums, and division). Note this is not quite a replication of the paper, as it includes all cases in the homicide/murder minor crime category, not just knife crime. There end up being a total of 147 homicides/murders from 2018 through February 2019, so the nature of the task is very similar: predicting a pretty rare outcome among almost 5,000 lower super output areas (4,831 to be exact).
So the first plot I like to make goes like this. Use whatever metric you want based on historical data to rank your areas; here I used assaults from 2008 through 2017. Sort the dataset in descending order based on your prediction, and then calculate the cumulative number of homicides. Then calculate two more columns: the cumulative proportion of homicides your ranking captures, and the cumulative proportion of areas.
Easier to show than to say. So for reference your data might look something like below (pretend we have 100 homicides and 1000 areas for a simpler looking table):
PriorAssault CurrHom CumHom PropHom PropArea
1000 1 1 1/100 1/1000
987 0 1 1/100 2/1000
962 2 3 3/100 3/1000
920 1 4 4/100 4/1000
. . . . .
. . . . .
. . . . .
0 0 100 100/100 1000/1000
You would sort the PriorAssault column, and then calculate CumHom (cumulative homicides), PropHom (proportion of all homicides), and PropArea (proportion of all areas). Then you just plot PropArea on the X axis and PropHom on the Y axis. Here is that plot using the London data.
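The calculations behind that plot are easy to script. Here is a minimal Python sketch of the same steps (the post does this in SPSS/Excel; the toy numbers below are made up to mirror the example table above, not the London data):

```python
# Toy data: (prior assaults used for ranking, current homicides) per area
areas = [(1000, 1), (987, 0), (962, 2), (920, 1), (300, 0), (0, 1)]

# Sort descending on the ranking metric
areas.sort(key=lambda a: a[0], reverse=True)

total_hom = sum(hom for _, hom in areas)
total_areas = len(areas)

# Build rows of (PriorAssault, CurrHom, CumHom, PropHom, PropArea)
rows = []
cum_hom = 0
for rank, (prior, hom) in enumerate(areas, start=1):
    cum_hom += hom
    rows.append((prior, hom, cum_hom, cum_hom / total_hom, rank / total_areas))

# Plot the last element (PropArea) on the X axis vs PropHom on the Y axis
```

The same logic is just a sort, a cumulative sum, and two divisions in any stats package.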
Paul Ekblom suggests plotting the ROC curve. I am too lazy to show it here, but it is very similar to the above graph. Basically you can do a weighted ROC curve, so areas with more than 1 homicide get more weight in the graph (see Mohler and Porter, 2018 for an academic reference on this point).
Here is the weighted ROC curve that SPSS spits out; I’ve also superimposed the predictions generated via prior homicides. You can see that prior homicides as the predictor is very near the line of equality, suggesting prior homicides are no better than a coin flip, whereas using all prior assaults does a little better, although not great. SPSS gives the area-under-the-curve stat as 0.66 with a standard error of 0.02.
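That area-under-the-curve number can also be approximated by hand from the same two proportion columns using the trapezoid rule. A small Python sketch (the function name and toy inputs are mine, not from the post):

```python
def auc_trapezoid(prop_area, prop_hom):
    """Trapezoid-rule area under the capture curve.

    prop_area and prop_hom are the increasing PropArea/PropHom
    columns described above, each ending at 1.
    """
    xs = [0.0] + list(prop_area)  # anchor the curve at the (0, 0) origin
    ys = [0.0] + list(prop_hom)
    area = 0.0
    for i in range(1, len(xs)):
        area += (xs[i] - xs[i - 1]) * (ys[i] + ys[i - 1]) / 2.0
    return area

# A predictor no better than a coin flip hugs the line of equality, AUC = 0.5
print(auc_trapezoid([0.25, 0.5, 0.75, 1.0], [0.25, 0.5, 0.75, 1.0]))  # 0.5
```

A curve that bows toward the top-left (capturing more homicides in fewer areas) pushes the value toward 1.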
Note that the prediction can be anything, it does not have to be prior crimes. It could be predictions from a regression model (like RTM), see this paper of mine for an example.
So while these do an OK job of showing the overall predictive ability of whatever metric (here they show using assaults is better than random), it isn’t really great evidence that hot spots are the go-to strategy. Hot spots policing relies on very targeted enforcement of a small number of areas, whereas the ROC curve covers the entire study area. If you need to patrol 1,000 LSOAs to capture enough crimes to make it worth your while, I wouldn’t call that hot spots policing anymore; it is too large.
So another graph you can do is to just plot the cumulative number of crimes you capture versus the total number of areas. Note this is based on the same information as before (rankings based on assaults), just plotting whole numbers instead of proportions. But it drives home the point a bit better that you need to go to quite a large number of areas to capture a substantive number of homicides. Here I zoom in the plot to only show the first 800 areas.
So even though the overall curve shows better than random predictive ability, it is unclear to me if a rare homicide event is effectively concentrated enough to justify hot spots policing. Better than random predictions are not necessarily good enough.
A final metric worth making note of is the Predictive Accuracy Index (PAI). The PAI is often used in evaluating forecast accuracy, see some of the work of Spencer Chainey or Grant Drawve for some examples. The PAI is simply % Crime Captured/% Area
, which we have already calculated in our prior graphs. So you want a value much higher than 1.
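Since both proportions are already columns in the earlier table, computing PAI is a one-line ratio. A quick Python sketch with made-up proportions (not values from the London data):

```python
def pai(prop_crime_captured, prop_area):
    """Predictive Accuracy Index: % crime captured / % area flagged."""
    return prop_crime_captured / prop_area

# Flagging 1% of areas that capture 10% of homicides gives a PAI around 10
print(pai(0.10, 0.01))

# A PAI of 1 means the flagged areas do no better than a uniform spread of crime
print(pai(0.05, 0.05))  # 1.0
```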
While those cited examples again use tables with simple cutoffs, you can make a graph like this to show the PAI metric under different numbers of areas, same as the above plots.
The sawtooth pattern ends up looking very much like a precision-recall curve, but I haven’t sat down and figured out the equivalence between the two as of yet. It is pretty noisy, but we might have two regimes based on this: target around 30 areas for a PAI of 3 to 5, or target 150 areas for a PAI of 3. PAI values that low are not something to brag to your grandma about though.
There are other stats, like the predictive efficiency index (the PAI versus the best possible PAI) and the recapture rate index, that you could make the same types of plots with. But I don’t want to put everyone to sleep.
]]>I have too many PDFs to download them all manually (over 2,000), so I wrote a script in Python to download them. Unlike prior scraping examples I’ve written about, you need to be signed into your CiteULike account to download the files. Hence I use the selenium library to mimic what you normally do in a web browser.
So let me know what bibliography manager I should switch to. Really one of the main factors will be if I can automate the conversion, including PDFs (even if it just means pointing to where the PDF is stored on my local machine).
This is a good tutorial to know even if you have nothing to do with CiteULike. There are various web services where you need to sign in or otherwise mimic the browser like this to download data repeatedly, such as a PD system where you need to input a set of dates to get back crime incidents (which limits the number returned, so you need to query repeatedly to get a full sample). The selenium library can be used in a similar fashion in those circumstances.
]]>There are two reasons you might want to do this for crime analysis:
The second is actually more common in the academic literature: see John Hipp’s Egohoods, Liz Groff’s work on measuring proximity to bars, or Joel Caplan’s use of kernel density to estimate the effect of crime generators. Jerry Ratcliffe and colleagues’ work on the buffer intensity calculator is actually the motivation for the original request. So here are some quick code snippets in R to accomplish either. Here is the complete code and original data to replicate.
Here I use over 250,000 reported Part 1 crimes in DC from 2008 through 2015, 173 school locations, and 21,506 street units (street segment midpoints and intersections) I constructed for various analyses in DC (all from open data sources) as examples.
First, let’s define where our data is located and read in the CSV files (don’t judge me for setting the directory, I do not use RStudio!)
MyDir <- 'C:\\Users\\axw161530\\Dropbox\\Documents\\BLOG\\buffer_stuff_R\\Code' #Change to location on your machine!
setwd(MyDir)
CrimeData <- read.csv('DC_Crime_08_15.csv')
SchoolLoc <- read.csv('DC_Schools.csv')
Now there are several ways to do this, but here is the way I think will be most useful in general for folks in the crime analysis realm. Basically the workflow is this:
For the distance weighting function there are a bunch of choices (see Jerry’s buffer intensity calculator linked previously for some example discussion). I’ve written previously about using the bisquare kernel, so I will illustrate with that.
Here is an example for the first school record in the dataset.
#Example for crimes around school, weighted by Bisquare kernel
BiSq_Fun <- function(dist,b){
    ifelse(dist < b, ( 1 - (dist/b)^2 )^2, 0)
}
S1 <- t(SchoolLoc[1,2:3])
Dis <- sqrt( (CrimeData$BLOCKXCOORD - S1[1])^2 + (CrimeData$BLOCKYCOORD - S1[2])^2 )
Wgh <- sum( BiSq_Fun(Dis,b=2000) )
Then repeat that for all of the locations that you want the buffer intensities, and stuff it in the original SchoolLoc
data frame. (Takes less than 30 seconds on my machine.)
SchoolLoc$BufWeight <- -1 #Initialize field
#Takes about 30 seconds on my machine
for (i in 1:nrow(SchoolLoc)){
    S <- t(SchoolLoc[i,2:3])
    Dis <- sqrt( (CrimeData$BLOCKXCOORD - S[1])^2 + (CrimeData$BLOCKYCOORD - S[2])^2 )
    SchoolLoc[i,'BufWeight'] <- sum( BiSq_Fun(Dis,b=2000) )
}
In this example there are 173 schools and 276,621 crimes. It is too big to create all of the pairwise comparisons at once (which would generate nearly 50 million records), but the looping isn’t so cumbersome and slow that it is worth building a KD-tree.
One thing to note about this technique is that if the buffers are large (or you have locations nearby one another), one crime can contribute to weighted crimes for multiple places.
To extend this idea to estimating attributes at places, you essentially swap out the crime locations with whatever you want to calculate, à la Liz Groff and her inverse distance weighted bars paper. I will show something a little different though, using the weights to create a weighted sum, which is related to John Hipp and Adam Boessen’s idea of Egohoods.
So here, for every street unit I’ve created in DC, I want an estimate of the number of students nearby. I not only want to count the number of kids attending schools nearby, but also to weight schools that are closer to the street unit by a higher amount.
So here I read in the street unit data. Also I do not have school attendance counts in this dataset, so I just simulate some numbers to illustrate.
StreetUnits <- read.csv('DC_StreetUnits.csv')
StreetUnits$SchoolWeight <- -1 #Initialize school weight field
#Adding in random school attendance
SchoolLoc$StudentNum <- round(runif(nrow(SchoolLoc),100,2000))
Now it is very similar to the previous example, you just do a weighted sum of the attribute, instead of just counting up the weights. Here for illustration purposes I use a different weighting function, inverse distance weighting with a distance cutoff. (I figured this would need a better data management strategy to be timely, but this loop works quite fast as well, again under a minute on my machine.)
#Will use inverse distance weighting with cutoff instead of bisquare
Inv_CutOff <- function(dist,cut){
    ifelse(dist < cut, 1/dist, 0)
}
for (i in 1:nrow(StreetUnits)){
    SU <- t(StreetUnits[i,2:3])
    Dis <- sqrt( (SchoolLoc$XMeters - SU[1])^2 + (SchoolLoc$YMeters - SU[2])^2 )
    Weights <- Inv_CutOff(Dis,cut=8000)
    StreetUnits[i,'SchoolWeight'] <- sum( Weights*SchoolLoc$StudentNum )
}
The same idea could be used for other attributes, like sales volume for restaurants to get a measure of the business of the location (I think more recent work of John Hipp’s uses the number of employees).
For some attributes you may want the weighted mean instead of a weighted sum. For example, if you were using estimates of the proportion of residents in poverty, it makes more sense for this measure to be a spatially smoothed mean estimate than a sum. In that case it works exactly the same, except you replace sum( Weights*SchoolLoc$StudentNum ) with sum( Weights*SchoolLoc$StudentNum )/sum(Weights) (with the poverty estimate swapped in for StudentNum). (You could use the centroids of census block groups in place of the polygon data.)
Using these buffer weights really just swaps one arbitrary decision in the data analysis (the buffer distance) for another (the distance weighting function). Although the weighting function is more complicated, I think it is probably closer to reality for quite a few applications.
Many of these different types of spatial estimates are related to one another (kernel density estimation, geographically weighted regression, kriging), so there are many different ways you could go about making similar estimates. Not letting the perfect be the enemy of the good, I think what I show here will work quite well for many crime analysis applications.
]]>The trend on the original count scale looks linear, but on the log scale the variance is much nicer. So I’m not sure what the best forecast would be.
I thought the demise had already started earlier in the year, as I saw the first year-over-year decreases in June and July. But the views recovered in the following months.
So based on that, I think the slowdown in growth is a better bet than the linear projection.
For those interested in extending their reach, you should not only consider social media and creating a website/blog, but also writing up your work for a more general newspaper. I wrote an article for The Conversation about some of my work on officer involved shootings in Dallas, and that accumulated nearly 7,000 views within a week of it being published.
Engagement with a greater audience is very bursty. Looking at my statistics for particular articles, it doesn’t make much sense to report average views per day. I tend to get a ton of views in the first few days, and then basically nothing after that. So if I rank the top posts by average views per day, the list is dominated by my more recent posts.
This is partly due to shares on Twitter, which drive short-term views but do not impact longer-term views as far as I can tell. That is, a popular post on Twitter does not appear to predict a consistent stream of views referred via Google searches. In the past year I got a ratio of about 50~1 referrals from Google vs. Twitter, and I did not have any posts with a consistent number of views (most settle in at under 3 views per day after the initial wave). So basically all of my most viewed posts are the same as in prior years.
Since I joined Twitter this year, I actually have made fewer blog posts. Not including this post, I’ve made 29 posts in 2018.
Year Posts
2011 5
2012 30
2013 40
2014 45
2015 50
2016 40
2017 35
2018 29
Some examples of substitution are tweets when a paper is published. I typically do a short write-up when I post a working paper; there is not much point in doing another one when it is published online. (To date I have not had a working paper greatly change in content from the published version.) I generally just like sharing nice graphs I am working on. Here is an example of citations over time I quickly published to Twitter, which was simpler than doing a whole blog post.
Since it is difficult to determine how much engagement I will get for any particular post, it is important to just keep plugging away. Twitter can help a particular post take off (see these examples I wrote about for the Cross Validated Blog), but any one tweet or blog post is more likely to be a dud than anything.
]]>