Knowing when to fold them: A quantitative approach to ending investigations

The recent work on investigations in the criminal justice field has my head turning about potential quantitative applications in this area (check out the John Eck & Kim Rossmo podcasts on Jerry’s site first, then check out the recent papers in Criminology and Public Policy on the topic for a start). One particular problem that was presented to me was detective case loads — detectives are human, so they can only handle so many cases at once. Triage typically happens at the initial crime reporting stage, with inputs such as the seriousness of the offense, the overall probability of the case being solved, and the future dangerousness of the folks involved going into the calculus of whether to assign a case.

Here I wanted to focus on a different problem though — how long to keep cases open? There are diminishing returns to keeping cases open indefinitely, and so PDs should be able to right size the backend of detective open cases as well as the front end triaging. Here my suggested solution is to estimate a survival model of the probability of a case being solved, and then you can estimate an expected return on investment given the time you put in.

Here is a simplified example. Say the table below shows the (instantaneous) probability of a case being solved per weeks put into the investigation.

Week 1  20%
Week 2  10%
Week 3   5%
Week 4   3%
Week 5   1%

In survival model parlance, this would be the hazard function in discrete time increments. The probabilities diminish over time, which should generally be true (a higher probability of being solved right away, which gets lower as the case ages). The expected return of investigating this crime at time t is the cumulative probability of the crime being solved by time t, multiplied by whatever value you assign to the case being solved. The costs of investigating are fixed (based on the detective salary), so the cumulative cost is just t*invest_costs.

So just to fill in some numbers, let’s say that it costs the police department $1,000 a week to keep an investigation going. Also say a crime has a return of $10,000 if it is solved (the latter number will be harder to figure out in practice, as cost of crime estimates are not a perfect fit). So filling in our table, we have below our detective return on investment estimates (note that the cumulative probability of being solved is not simply the sum of the instantaneous probabilities, else it would eventually go over 100%). So the return on investment (ROI) at week 1 is 10,000*0.2 = 2,000, at week 2 it is 10,000*0.28 = 2,800, etc.

        h(t) solved%  cum-costs   ROI   
Week 1  20%    20%     1,000     2,000
Week 2  10%    28%     2,000     2,800
Week 3   5%    32%     3,000     3,200
Week 4   3%    33%     4,000     3,300
Week 5   1%    34%     5,000     3,400
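To make the compounding explicit, here is a quick Python sketch that reproduces the table from the hazards above (the cumulative solved probability carries forward the chance the case is still unsolved; any small differences from the table are just rounding).

hazard = [0.20, 0.10, 0.05, 0.03, 0.01]  #instantaneous solve probability per week, from the table
value_solved = 10000   #hypothetical value of a solved case
cost_per_week = 1000   #hypothetical weekly investigation cost

surv = 1.0        #probability the case is still unsolved going into the week
cum_solved = 0.0  #cumulative probability the case has been solved
for week, h in enumerate(hazard, start=1):
    cum_solved += surv * h   #chance it gets solved this week
    surv *= (1 - h)          #chance it survives unsolved into next week
    roi = value_solved * cum_solved
    cum_cost = cost_per_week * week
    print(week, round(cum_solved, 3), cum_cost, round(roi))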

So the cumulative costs outweigh the expected return by Week 4 here. So in practice (in this hypothetical example) you may say to a detective you get 4 weeks to figure it out; if it is not solved by then it should be closed (but not cleared), and you should move on to other things. In the long run (I think) this strategy will make sure detective resources are balanced against actual cases solved.

This right sizes investigation lengths from a global perspective, but you might also consider whether to close a case on an individual, case-by-case basis. In that case you wouldn’t count the sunk cost of the investigation so far; what matters is the probability of the case being solved going forward relative to the future resources needed. (You do the same table, just start the cum-costs and solved-percent columns from scratch whenever you are making that decision.)

In an actual applied setting, you can estimate the survival function however you want (e.g. you may want a cure/mixture model, since not all cases will be solved even given infinite time). It is also the case that different crimes will not only have different survival curves, but also different costs of crime (e.g. a murder has a greater cost to society than a theft) and probably different investigative resources needed (detective costs may also get lower over time, so are not constant). You can bake all of that right into this estimate. So you may say the cost of a murder is infinite, and you should keep that case open and keep investigating it forever. A burglary though may warrant only a very short time interval before it is dropped (but still have some initial investment).
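In practice those hazard estimates would come from historical case data. As a minimal sketch of that estimation step (not something from the original post), here is what it might look like in Python with the lifelines library, assuming a hypothetical file of closed cases with columns weeks_open and solved, where closed-but-unsolved cases enter as censored observations:

import pandas as pd
from lifelines import KaplanMeierFitter

#Hypothetical data: one row per case, weeks_open = how long it was worked,
#solved = 1 if cleared, 0 if closed unsolved (censored)
cases = pd.read_csv('closed_cases.csv')

kmf = KaplanMeierFitter()
kmf.fit(durations=cases['weeks_open'], event_observed=cases['solved'])

#1 - S(t) is the cumulative probability of a case being solved by week t,
#the quantity that goes into the ROI table above
cum_solved = 1 - kmf.survival_function_['KM_estimate']
print(cum_solved.head(10))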

Another neat application of this is that if you can generate reasonable returns to solving crimes, you can right size your overall detective bureau. That is, you can make a quantitative argument: I need X more detectives, and they will help solve Y more crimes, resulting in Z return on investment. It may be we should greatly expand detective bureaus, but have them keep many cases open for only a short time period. I’m thinking of the recent officer shortages in Dallas, where very few cases are assigned at all. (Some PDs have patrol officers take on initial detective duties at the crime scene as well.)

There are definitely difficulties with applying this approach. One is that estimating the cost of solving a crime is going to be tough, since it bridges quantitative cost of crime estimates (although many of those are sunk costs after the crime has been perpetrated; arresting someone does not undo the bullet wound), the likelihood of future reoffending, and ethical boundaries as well. If we are thinking about a detective bureau that is over-booked to begin with, we aren’t deciding on assigning individual cases at that point, but will need to consider pre-empting current investigations for new ones (e.g. if you drop case A and pick up case B, we have a better ROI). And that is ignoring the survival estimation for different cases, which is tricky using observational data as well (selection biases in what cases are currently assigned could make our survival curve estimates too low or too high).

This problem has to have been tackled in different contexts before (either by actuaries or in other business/medical contexts). I don’t know the best terms to google though to figure it out — so let me know in the comments if there is related work I should look into on solving this problem.

Actively monitoring place based crime interventions

Recently I presented my work (with Jerry Ratcliffe) on the Weighted Displacement Difference test at the New York State GIVE Symposium. My talk fit right in with what the folks at the American Society of Evidence Based Police discussed as well. In particular, Jason Potts and Jeremiah Johnson gave a talk about how officers can conduct their own experiments, and my work provides a simple tool to test if changes over time are significant or just due to chance.

There was one point of contention though between us — ASEBP folks advocate for the failing fast model of evaluation, whereas I advocated for planning more long term experiments. In particular, I suggest this chart to plan your experiments. So say you have an area with only 10 crimes per month: I would suggest you run the experiment for at least 4 months, so that if what you are doing is 50% effective at reducing crime, you will conclude it has at least weak evidence of effectiveness using my WDD test. If you think 50% is too high of a bar, running it for 12 months means it only needs to be a little over 25% effective to tell if it is working.

The idea behind failing fast and innovating I totally get, but whether we can actually see if something is effective in the short run with low baseline crime counts may be a road block to this idea in practice. Note that this is not me bagging on people doing experiments — what is most likely to happen if you do an experiment with low power is that you will conclude it is not effective, even if it partially works. So I’m more concerned the BetaGov fail fast model is likely to throw out cost-effective interventions that don’t appear on their face to be effective, as opposed to generating false positives.1

Am I being too negative though? And also can we create a monitoring tool to give more immediate feedback — so instead of waiting a year and seeing the results, evaluating the efficacy of an intervention over time? To do this I am giving cusum charts a try, and did a little simulation to show how it might look in practice. SPSS Code to replicate the findings here.

So what I did was simulate a baseline control area with 10 crimes per time period, and a treated area that had a 20% reduction (so it goes down to 8 crimes on average per time period). Here is the time series of those two areas; the black line is the control area, and the red line is the treated area. Time periods can be whatever you want (e.g. days, weeks, months); what matters is the overall average and the difference between the two series.

Based on this graph, you can’t really visually tell if the red treated area is doing any better than the black control area — they overlap too much. But we aren’t just interested in the effect for any one time period, but in the cumulative effect over time. To calculate that, you just subtract the black line from the red line, and take the cumulative sum of that difference. The next chart shows that statistic, along with 100 simulated lines showing what happens when you do the same cumulative statistic to data with no changes.

So you can see here that it takes about 13 time periods to show the cumulative effects are outside of the simulation boundaries, but you might conclude there is suggestive evidence of effectiveness after say 8+ time periods. Going further out it still shows the cumulative number of crimes prevented over the life of the intervention, so goes down to around 75 crimes prevented by 25 time periods.

The number of time periods necessary to show divergence is dependent on how effective the intervention is. So if we have the same baseline average of 10 crimes per time period, but the intervention is 50% effective (so reduces to an average of 5 crimes per time period), you can tell it diverges by period 6 in this second simulation example.
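The replication code linked above is SPSS, but the idea only takes a few lines in any language. Here is a rough Python sketch of the same simulation (Poisson counts, a treated area with an assumed 20% reduction, and null bands built from 100 simulated pairs of series with no true difference):

import numpy as np

rng = np.random.default_rng(10)
periods = 25
control = rng.poisson(10, periods)  #baseline area, average 10 crimes per period
treated = rng.poisson(8, periods)   #treated area, assumed 20% reduction

#cumulative difference, negative values = cumulative crimes prevented
cusum = np.cumsum(treated - control)

#null bands: same statistic for 100 simulated pairs with no true difference
null_sims = np.array([np.cumsum(rng.poisson(10, periods) - rng.poisson(10, periods))
                      for _ in range(100)])
lower, upper = null_sims.min(axis=0), null_sims.max(axis=0)

for t in range(periods):
    outside = cusum[t] < lower[t] or cusum[t] > upper[t]
    print(t + 1, cusum[t], 'outside bands' if outside else '')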

If we go back to my power chart I made for the WDD test, you can see that these effective time periods are close to my power chart suggestions for the weak evidence line. So this cusum approach is maybe slightly more diagnostic, and has the benefit you may be able to stop the experiment early if it is really effective.

You should still commit to running the experiment for a set amount of time though. The amount of time should be based on how effective you think the intervention should be, as well as cost-benefit analyses. If something costs a lot of money (e.g. overtime), the effectiveness threshold to make it worthwhile in practice is a much higher bar than something that is closer to zero cost (such as shifting around current assignments). Given that ASEBP is advocating for lower level officers to experiment, those experiments are more likely to be the latter type of low cost interventions.

There are still a few issues with using cusum charts like this though. One, this is very dependent on having a control area with the same level of crime counts. My WDD test only needs the parallel trends assumption, but this needs both parallel trends and equal levels. (There are ways to get rid of the equal levels assumption though; one is to take the difference-in-differences and calculate the cusums of those over time.)

Another is that you need to reset cusum charts after a particular time period — you can see the simulations are random walks, and so grow much wider over time. I’m not sure at that point though if you should choose a new control area or just stick with the prior one. In the first example you can see the red line overestimates the effectiveness — in the first chart the true effect should be -2 per time period (estimated -75 versus -50 after 25 time periods). For the second the true effect is -5 per time period, so the estimate is a slight underestimate (estimated -100 versus -125 after 25 time periods).

But this is about the best meet-in-the-middle between actively monitoring place based crime interventions and my advocacy for planning long term interventions that I can drum up for now. It is short term feedback, but you should be committed to running the experiment for a longer period of time. The sequential monitoring allows you to stop early if the intervention is really effective; see this example for A/B tests. But otherwise you are often better off just planning a long term intervention and not peeking at the short term results.

Besides the technical stats portion of being able to tell if it diverges from a control area, it may also be behavioral, in that you need a longer period of time to generate deterrence, or for officers to effectively implement the strategy. You can see in these examples that if you only did 5 time periods, the lines meander about 0 and so don’t appear to be effective. It takes longer time periods, even with the 50% effective intervention, to know if the intervention worked given such low baseline crime counts.


  1. Although with low power you do have issues with what Andrew Gelman calls Type S (sign) and Type M (magnitude) errors as well. That is, even if you do conclude it is effective, you may think it is way more effective than it actually is in reality based on your noisy estimates. Or you conclude it has iatrogenic effects when it actually works in practice.

Optimal treatment assignment with network spillovers

This post is motivated by a recent piece by Wood and Papachristos (2019) (WP from here on), which finds that if you treat an individual at high risk for gunshot victimization, there are positive spillover effects on the individuals they are connected to. This creates a tricky problem in identifying the best individuals to intervene with given finite resources. This is because you may not want to just choose the people with the highest risk – the best bang for your buck will be folks who are some function of high risk and connected to others with high risk (as well as those in areas of the network not already treated).

For a simplified example consider the network below, with individuals’ baseline probabilities of future risk noted in the nodes. Let’s say the local treatment effect reduces the probability to 0, the spillover effect reduces the probability by half, and you can only treat 1 node. Who do you treat?

We could select the person with the highest baseline probability (B), and the reduction ends up being 0.5(B) + 0.1(E) = 0.6 (the 0.1 is the spillover effect for E). We could choose node A, which has a high baseline probability and the most connections, and the reduction is 0.4(A) + 0.05(C) + 0.05(D) + 0.1(E) = 0.6. But it ends up that in this network the optimal node to choose is E, because the spillovers to A and B justify choosing a lower probability individual: 0.2(E) + 0.2(A) + 0.25(B) = 0.65.
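To double check that arithmetic, here are a few lines of Python that enumerate every possible single treated node in this toy network (the p_l and p_s values are the full local and halved spillover reductions from the figure, the same numbers used in the optimization code below):

p_l = {'a': 0.4, 'b': 0.5, 'c': 0.1, 'd': 0.1, 'e': 0.2}     #local effect = full baseline probability
p_s = {'a': 0.2, 'b': 0.25, 'c': 0.05, 'd': 0.05, 'e': 0.1}  #spillover effect = half the baseline
neigh = {'a': ['c','d','e'], 'b': ['e'], 'c': ['a'], 'd': ['a'], 'e': ['a','b']}

for treat in p_l:
    reduction = p_l[treat] + sum(p_s[j] for j in neigh[treat])
    print(treat, round(reduction, 2))
#treating e gives 0.2 + 0.2 + 0.25 = 0.65, the largest expected reduction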

Using this idea, I formulated an integer linear program with a local treatment effect and a spillover effect:

\text{Maximize} \{ \sum_{i = 1}^n (L_i\cdot p_{li} + S_i \cdot p_{si}) \}

Where p_{li} is the reduction in the probability due to the local effect, and p_{si} is the reduction due to the spillover effect. These probabilities are fixed values you know at the onset, e.g. estimated from some model like in Wheeler, Worden, and Silver (2019) (and Papachristos has related work using the network itself to estimate risk). Each node, i, then gets two decision variables: L_i will equal 1 if that node is treated, and S_i will equal 1 if the node gets a spillover effect (depending on who is treated). The findings in WP actually show that these effects are not additive (you don’t get extra effects if both you and your neighbors are treated, or if you have multiple neighbors treated), and this makes it easier to keep the problem on the probability scale. So we then have our constraints:

  1. L_i , S_i \in \{ 0,1 \}
  2. \sum L_i = K
  3. S_i \leq 1 - L_i , \forall \text{ Node}_i
  4. \sum_{\text{neigh}(i)} L_j \geq S_i , \forall \text{ Node}_i

Constraint 1 says these are binary 0/1 decision variables. Constraint 2 limits the number of people treated to K (a value that we choose). Constraint 3 ensures that if a local decision variable is set to 1, then the spillover variable has to be set to 0; if the local is 0, the spillover can be either 0 or 1. Constraint 4 looks at the neighbor relations: for Node i, if any of its neighbors’ local treatment decision variables is set to 1, the spillover decision variable for i can be set to 1.

So in the end, if the number of nodes is n, we have 2*n decision variables and 2*n + 1 constraints. I find it easier just to look at code sometimes, so here is this simple network and problem formulated in python using networkx and pulp. (Here is a full file of the code and data used in this post.)

####################################################
import pulp
import networkx

Nodes = ['a','b','c','d','e']
Edges = [('a','c'),
         ('a','d'),
         ('a','e'),
         ('b','e')]

p_l = {'a': 0.4, 'b': 0.5, 'c': 0.1, 'd': 0.1,'e': 0.2}
p_s = {'a': 0.2, 'b': 0.25, 'c': 0.05, 'd': 0.05,'e': 0.1}
K = 1

G = networkx.Graph()
G.add_edges_from(Edges)

P = pulp.LpProblem("Choosing Network Intervention", pulp.LpMaximize)
L = pulp.LpVariable.dicts("Treated Units", [i for i in Nodes], lowBound=0, upBound=1, cat=pulp.LpInteger)
S = pulp.LpVariable.dicts("Spillover Units", [i for i in Nodes], lowBound=0, upBound=1, cat=pulp.LpInteger)

P += pulp.lpSum( p_l[i]*L[i] + p_s[i]*S[i] for i in Nodes)
P += pulp.lpSum( L[i] for i in Nodes ) == K

for i in Nodes:
    P += S[i] <= 1 - L[i]                                       #Constraint 3
    P += pulp.lpSum( L[j] for j in G.neighbors(i) ) >= S[i]     #Constraint 4

P.solve()

#Should select e for local, and a & b for spillover
print(pulp.value(P.objective))
print(pulp.LpStatus[P.status])

for n in Nodes:
    print([n,L[n].varValue,S[n].varValue])
####################################################

And this returns the correct results, that node E is chosen in this example, and A and B have the spillover effects. In the linked code I provided a nicer function to just pipe in your network, your two probability reduction estimates, and the number of treated units, and it will pipe out the results for you.

For an example with a larger network for just proof of concept, I conducted the same analysis, choosing 20 people to treat in a network of 311 nodes I pulled from Rostami and Mondani (2015). I simulated some baseline probabilities to pipe in, and made it so the local treatment effect was a 50% reduction in the probability, and a spillover effect was a 20% reduction. Here red squares are treated, pink circles are the spill-over, and non-treated are grey circles. It did not always choose the locally highest probability (largest nodes), but did tend to choose highly connected folks also with a high probability (but also chose some isolate nodes with a high probability as well).

This problem is solved in an instant. And I think out of the box this will work even for large networks of say over 100,000 nodes (I have let CPLEX churn on problems with near half a million decision variables on my desktop overnight), although I need to check myself to make 100% sure. A simple way to make the problem smaller if needed is to conduct the analysis on subsets of connected components and then shuffle the results back together, as sketched below.
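Here is a sketch of that decomposition, reusing the same pulp formulation per component. How to split the K treated slots across components is a choice you have to make; splitting proportionally to component size, as below, is my assumption and not something from the original analysis.

import math
import networkx
import pulp

def solve_component(sub, p_l, p_s, k):
    #Same ILP as above, restricted to the nodes of subgraph `sub`
    nodes = list(sub.nodes())
    prob = pulp.LpProblem("Component", pulp.LpMaximize)
    L = pulp.LpVariable.dicts("L", nodes, cat=pulp.LpBinary)
    S = pulp.LpVariable.dicts("S", nodes, cat=pulp.LpBinary)
    prob += pulp.lpSum( p_l[i]*L[i] + p_s[i]*S[i] for i in nodes )
    prob += pulp.lpSum( L[i] for i in nodes ) == k
    for i in nodes:
        prob += S[i] <= 1 - L[i]
        prob += pulp.lpSum( L[j] for j in sub.neighbors(i) ) >= S[i]
    prob.solve()
    return [i for i in nodes if L[i].varValue == 1]

treated = []
for comp in networkx.connected_components(G):
    sub = G.subgraph(comp)
    k_sub = max(1, math.floor(K * len(comp) / G.number_of_nodes()))  #may over-allocate for tiny components
    treated += solve_component(sub, p_l, p_s, k_sub)
print(treated)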

Looking at the results, it is very similar to my choosing representatives work (Wheeler et al., 2019), and I think you could get similar results by just piping in 1’s for each of the local and spillover probabilities. One of the things I want to work on going forward though is treatment non-compliance. So if we are talking about giving some of these folks social services, they don’t always take up your offer (this is a problem in choosing reps for call-ins as well). WP actually relied on this to draw control nodes in their analysis. I thought for a bit the problem of treatment non-compliance in this setting was intractable, but another paper on a totally different topic (Bogle et al., 2019) has given me some recent hope that it can be solved.

This same idea is also related to hot spots policing (think spatial diffusion of benefits). And I have some ideas about that to work on in the future as well (e.g. how wide of a net to cast when doing hot spots interventions given geographical constraints).

References

  • Bogle, J., Bhatia, N., Ghobadi, M., Menache, I., Bjørner, N., Valadarsky, A., & Schapira, M. (2019). TEAVAR: striking the right utilization-availability balance in WAN traffic engineering. In Proceedings of the ACM Special Interest Group on Data Communication (pp. 29-43).
  • Rostami, A., & Mondani, H. (2015). The complexity of crime network data: A case study of its consequences for crime control and the study of networks. PloS ONE, 10(3), e0119309.
  • Wheeler, A. P., McLean, S. J., Becker, K. J., & Worden, R. E. (2019). Choosing Representatives to Deliver the Message in a Group Violence Intervention. Justice Evaluation Journal, Online First.
  • Wheeler, A. P., Worden, R. E., & Silver, J. R. (2019). The Accuracy of the Violent Offender Identification Directive Tool to Predict Future Gun Violence. Criminal Justice and Behavior, 46(5), 770-788.
  • Wood, G., & Papachristos, A. V. (2019). Reducing gunshot victimization in high-risk social networks through direct and spillover effects. Nature Human Behaviour, 1-7.

 

Finding the dominant set in a network (python)

My paper, Choosing representatives to deliver the message in a group violence intervention, is now published online at the Justice Evaluation Journal. For those who don’t have access to that journal, here is a link good for 50 e-prints (for a limited time), and here is a pre-print version, and you can always send me an email for the published copy.

I’ve posted Python code to replicate the analysis, including the original network nodes and edges group data. I figured I would go through a quick example of applying the code for others to use the algorithm.

The main idea is that for the call-ins in a focused deterrence initiative, you want to identify folks who will spread the deterrence message around the network. When working with several PDs I figured looking at who was actually called in would be interesting. Literally the first network graph I drew is below on the left — folks who were called in are the big red squares. This was one of the main problem gangs, and the PD had done several call-ins for over a year at this point. Those are not quite the worst set of four folks to call in based on the topology of the network, but damn close.

But to criticize the PD I need to come up with a better solution — which is the graph on the right hand side. The larger red squares are my suggested call-ins, and they reach everyone within one step. That means everyone is at most one link away from someone who attended the call-in. When the whole graph is covered like this, the selected nodes are called a dominant set of the graph.

Below I give a quicker example using my code for others to generate the dominant set (instead of going through all of the replication analysis). If you are a PD interested in applying this for your focused deterrence initiative let me know!


So first to set up your python code, I import all of the needed libraries (only non-standard is networkx). Then I import my set of functions, named MyFunctions.py, and then change the working directory.

############################################################
#The libraries I need

import itertools
import networkx as nx
import csv
import sys
import os

#Now importing my own functions I made
locDir = r'C:\Users\axw161530\Dropbox\Documents\BLOG\DominantSet_Python'
sys.path.append(locDir)
from MyFunctions import *

#setting the working directory to this location
os.chdir(locDir)
#print(os.getcwd())
############################################################

The next part I read in the CSV data for City 4 Gang 1, both the nodes and the edges. Then I create a networkx graph simply based on the edges. Technically I do not use the node information at all for this, just the edges that list a source and a target.

############################################################
#Reading in the csv files that have the nodes and the edges
#And turning into a networkX graph

#simple function to read in csv files
def ReadCSV(loc):
    tup = []
    with open(loc) as f:
        z = csv.reader(f)
        for row in z:
            tup.append(tuple(row))
    return tup
            
#Reading in the node and edge csv files

nd = ReadCSV('Nodes_City4_Gang1.csv')
ed = ReadCSV('Edges_City4_Gang1.csv')
head_node = nd.pop(0) #First row for both is a header
head_edge = ed.pop(0)

#Turning my csv files into networkx objects
C1G4 = nx.Graph()
C1G4.add_edges_from(ed)
############################################################

Now to get my suggested dominant set, it is as simple as this function call:

ds_C1G4 = domSet_Whe(C1G4)
print(ds_C1G4)

In my current session this gives the nodes ['21', '18', '17', '16', '3', '22', '20', '6']. Which, if you look at my original graph, is somewhat different, but the differences are essentially single swaps where the best node to choose is arbitrary.
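If you just want a quick dominant set without the rest of the replication functions, a simple greedy heuristic (a sketch of a common approach, not the domSet_Whe function from the paper) looks something like this:

#Greedy heuristic: repeatedly pick the node covering the most currently uncovered nodes
def greedy_dominating_set(G):
    uncovered = set(G.nodes())
    dom = []
    while uncovered:
        best = max(G.nodes(), key=lambda n: len(({n} | set(G.neighbors(n))) & uncovered))
        dom.append(best)
        uncovered -= {best} | set(G.neighbors(best))
    return dom

print(greedy_dominating_set(C1G4))

Note networkx also ships a dominating_set function (nx.dominating_set), though it just returns some dominating set and makes no attempt to keep the number of selected nodes small, which is the main concern here.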

I have a bunch of other functions in the analysis; one of interest is, given who is under probation/parole, who are the best people to call in (see the domSet_WheSub function). Again if you are interested in pursuing this further always feel free to reach out to me.

David Bayley

David Bayley is most known in my research area, policing interventions to reduce crime, based on this opening paragraph in Police for the future:

The police do not prevent crime. This is one of the best kept secrets of modern life. Experts know it, the police know it, but the public does not know it. Yet the police pretend that they are society’s best defense against crime and continually argue that if they are given more resources, especially personnel, they will be able to protect communities against crime. This is a myth.

This quote is now paraded as backwards thinking, often presented before discussing the overall success of hot spots policing. If you didn’t read the book, you might come to the conclusion that this quote is a parallel to the nothing works mantra in corrections research. That take is not totally off-base: Police for the future was published in 1994, so it was just at the start of the CompStat revolution and hot spots policing. The evidence base was no doubt much thinner at that point and deserving of skepticism.

I don’t take the contents of David’s book to be as hardline on the stance that police cannot reduce crime, at least at the margins, as his opening quote suggests though. He has a chapter devoted to traditional police responses (crackdowns, asset forfeiture, stings, tracking chronic offenders), where he mostly expresses scientific skepticism of their effectiveness given their cost. He also discusses problem oriented approaches to solving crime problems, how to effectively measure police performance (outputs vs outcomes), and promotes evaluation research to see what works. All of it is still totally relevant twenty plus years later.

The greater context of David’s quote comes from his work examining police forces internationally. David was more concerned about professionalization of police forces. Part of this is better record keeping of crimes, and in the short term crime rates will often increase because of this. In class he mocked metrics used to score international police departments on professionalization that used crime as a measure that went into their final grade. He thought the function of the police was broader than reducing crime to zero.


I was in David’s last class he taught at Albany. The last day he sat on the desk at the front of the room and expressed doubt about whether he accomplished anything tangible in his career. This is the fate of most academics. Very few of us can point to direct changes anyone implemented in response to our work. Whether something works is independent of an evaluation I conduct to show it works. Even if a police department takes my advice about implementing some strategy, I am still only at best indirectly responsible for any crime reductions that follow. Nothing I could write would ever compete with pulling a single person from a burning car.

While David was being humble he was right. If I had to make a guess, I would say David’s greatest impact likely came about through his training of international police forces — which I believe spanned multiple continents and included doing work with the United Nations. (As opposed to saying something he wrote had some greater, tangible impact.) But even there if we went and tried to find direct evidence of David’s impact it would be really hard to put a finger on any specific outcome.

If a police department wanted to hire me, but I would be fired if I did not reduce crimes by a certain number within that first year, I would not take that job. I am confident that I can crunch numbers with the best of them, but given real constraints of police departments I would not take that bet. Despite devoting most of my career to studying policing interventions to reduce crime, even with the benefit of an additional twenty years of research, I’m not sure if David’s quote is as laughable as many of my peers frame it to be.

Plotting Predictive Crime Curves

Writing some notes on how to evaluate crime prediction models has been on the bucket list for a bit. A recent paper on knife homicides in London is a good use case scenario for motivation. In short, when you have continuous model predictions, there are a few different graphs I would typically like to see in place of accuracy tables.

The linked paper does not provide data, so for a similar illustration I grab the lower super output area (LSOA) crime stats from here, and use the 08-17 data to predict homicides in 2018 through Feb-2019. I’ve posted the SPSS code I used to do the data munging and graphs here — all the stats could be done in Excel as well (it just involves sorting, cumulative sums, and division). Note this is not quite a replication of the paper, as it includes all cases in the homicide/murder minor crime category, and not just knife crime. There end up being a total of 147 homicides/murders from 2018 through Feb-2019, so the nature of the task is very similar: predicting a pretty rare outcome among almost 5,000 lower super output areas (4,831 to be exact).

So the first plot I like to make goes like this. Use whatever metric you want based on historical data to rank your areas; here I used assaults from 08-17. Sort the dataset in descending order based on your prediction, and then calculate the cumulative number of homicides. Then calculate two more columns: the cumulative proportion of homicides your ranking captures, and the cumulative proportion of areas.

Easier to show than to say. So for reference your data might look something like below (pretend we have 100 homicides and 1000 areas for a simpler looking table):

 PriorAssault  CurrHom CumHom PropHom PropArea
 1000          1         1      1/100    1/1000
  987          0         1      1/100    2/1000
  962          3         4      4/100    3/1000
  920          1         5      5/100    4/1000
    .          .         .       .        .
    .          .         .       .        .
    .          .         .       .        .
    0          0       100    100/100 1000/1000

You would sort on the PriorAssault column, and then calculate CumHom (cumulative homicides), PropHom (proportion of all homicides) and PropArea (proportion of all areas). Then you just plot PropArea on the X axis and PropHom on the Y axis. Here is that plot using the London data.
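The linked replication code is SPSS, but the same table only takes a few lines of pandas if that is more your speed (the file name and column names here are just illustrative):

import pandas as pd

dat = pd.read_csv('lsoa_counts.csv')  #hypothetical file, one row per area
dat = dat.sort_values('PriorAssault', ascending=False).reset_index(drop=True)
dat['CumHom'] = dat['CurrHom'].cumsum()
dat['PropHom'] = dat['CumHom'] / dat['CurrHom'].sum()
dat['PropArea'] = (dat.index + 1) / len(dat)

#PropArea on the X axis, PropHom on the Y axis for the capture curve
dat.plot(x='PropArea', y='PropHom')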

Paul Ekblom suggests plotting the ROC curve, and I am too lazy now to show it, but it is very similar to the above graph. Basically you can do a weighted ROC curve (so predicting areas with more than 1 homicide get more weight in the graph). (See Mohler and Porter, 2018 for an academic reference to this point.)

Here is the weighted ROC curve that SPSS spits out; I’ve also superimposed the predictions generated via prior homicides. You can see that prior homicides as the predictor is very near the line of equality, suggesting prior homicides are no better than a coin-flip, whereas using all prior assaults does a little better job, although not great. SPSS gives the area-under-the-curve stat as 0.66 with a standard error of 0.02.

Note that the prediction can be anything, it does not have to be prior crimes. It could be predictions from a regression model (like RTM), see this paper of mine for an example.

So while these do an OK job of showing the overall predictive ability of whatever metric (here they show using assaults is better than random), it isn’t really great evidence that hot spots are the go-to strategy. Hot spots policing relies on very targeted enforcement of a small number of areas. The ROC curve shows the entire area. If you need to patrol 1,000 LSOAs to effectively capture enough crimes to make it worth your while, I wouldn’t call that hot spots policing anymore; it is too large.

So another graph you can do is to just plot the cumulative number of crimes you capture versus the total number of areas. Note this is based on the same information as before (using rankings based on assaults), just plotting whole numbers instead of proportions. But it drives home the point a bit better that you need to go to quite a large number of areas to be able to capture a substantive number of homicides. Here I zoom in the plot to only show the first 800 areas.

So even though the overall curve shows better than random predictive ability, it is unclear to me if a rare homicide event is effectively concentrated enough to justify hot spots policing. Better than random predictions are not necessarily good enough.

A final metric worth making note of is the Predictive Accuracy Index (PAI). The PAI is often used in evaluating forecast accuracy, see some of the work of Spencer Chainey or Grant Drawve for some examples. The PAI is simply % Crime Captured/% Area, which we have already calculated in our prior graphs. So you want a value much higher than 1.
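Using the columns from the pandas sketch above, the PAI at every possible cut-off is just one more division:

#PAI at each possible number of targeted areas
dat['PAI'] = dat['PropHom'] / dat['PropArea']
dat.plot(x='PropArea', y='PAI')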

While those cited examples again use tables with simple cut-offs, you can make a graph like this to show the PAI metric under different numbers of areas, same as the above plots.

The saw-tooth ends up looking very much like a precision-recall curve, but I haven’t sat down and figured out the equivalence between the two as of yet. It is pretty noisy, but we might have two regimes based on this — target around 30 areas for a PAI of 3-5, or target 150 areas for a PAI of 3. PAI values that low are not something to brag to your grandma about though.

There are other stats like the predictive efficiency index (PAI vs the best possible PAI) and the recapture-rate index that you could do the same types of plots with. But I don’t want to put everyone to sleep.

Weighted buffers in R

Had a request not so recently about implementing weighted buffer counts. The idea behind a weighted buffer is that instead of say counting the number of crimes that happen within 1,000 meters of a school, you want to give events that are closer to the school more weight.

There are two reasons you might want to do this for crime analysis:

  • You want to measure the amount of crime around a location, but you rather have a weighted crime count, where crimes closer to the location have a greater weight than those further away.
  • You want to measure attributes nearby a location (so things that predict crime), but give a higher weight to those closer to a location.

The second is actually more common in the academic literature — see John Hipp’s Egohoods, Liz Groff’s work on measuring proximity to bars, or Joel Caplan’s use of kernel density to estimate the effect of crime generators. Jerry Ratcliffe and colleagues’ work on the buffer intensity calculator is actually the motivation for the original request. So here are some quick code snippets in R to accomplish either. Here is the complete code and original data to replicate.

Here I use over 250,000 reported Part 1 crimes in DC from 08 through 2015, 173 school locations, and 21,506 street units (street segment midpoints and intersections) I constructed for various analyses in DC (all from open data sources) as examples.

Example 1: Crime Buffer Intensities Around Schools

First, let’s define where our data is located and read in the CSV files (don’t judge me setting the directory, I do not use RStudio!)

MyDir <- 'C:\\Users\\axw161530\\Dropbox\\Documents\\BLOG\\buffer_stuff_R\\Code' #Change to location on your machine!
setwd(MyDir)

CrimeData <- read.csv('DC_Crime_08_15.csv')
SchoolLoc <- read.csv('DC_Schools.csv')

Now there are several ways to do this, but here is the way I think will be most useful in general for folks in the crime analysis realm. Basically the workflow is this:

  • For a given school, calculate the distance between all of the crime points and that school
  • Apply whatever function to that distance to get your weight
  • Sum up your weights

For the function applied to the distance there are a bunch of choices (see Jerry’s buffer intensity calculator I linked to previously for some example discussion). I’ve written previously about using the bi-square kernel, so I will illustrate with that.

Here is an example for the first school record in the dataset.

#Example for crimes around school, weighted by Bisquare kernel
BiSq_Fun <- function(dist,b){
    ifelse(dist < b, ( 1 - (dist/b)^2 )^2, 0)
    }

S1 <- t(SchoolLoc[1,2:3])
Dis <- sqrt( (CrimeData$BLOCKXCOORD - S1[1])^2 + (CrimeData$BLOCKYCOORD - S1[2])^2 )
Wgh <- sum( BiSq_Fun(Dis,b=2000) )

Then repeat that for all of the locations for which you want the buffer intensities, and stuff the results in the original SchoolLoc data frame. (Takes less than 30 seconds on my machine.)

SchoolLoc$BufWeight <- -1 #Initialize field

#Takes about 30 seconds on my machine
for (i in 1:nrow(SchoolLoc)){
  S <- t(SchoolLoc[i,2:3])
  Dis <- sqrt( (CrimeData$BLOCKXCOORD - S[1])^2 + (CrimeData$BLOCKYCOORD - S[2])^2 )
  SchoolLoc[i,'BufWeight'] <- sum( BiSq_Fun(Dis,b=2000) )
}

In this example there are 173 schools and 276,621 crimes. It is too big to create all of the pairwise comparisons at once (which would generate nearly 50 million records), but the looping isn’t so cumbersome and slow that it is worth building a KDTree.

One thing to note about this technique is that if the buffers are large (or you have locations nearby one another), one crime can contribute to weighted crimes for multiple places.

Example 2: Weighted School Counts for Street Units

To extend this idea to estimating attributes at places, you just essentially swap out the crime locations with whatever you want to calculate, a la Liz Groff and her inverse distance weighted bars paper. I will show something a little different though, in using the weights to create a weighted sum, which is related to John Hipp and Adam Boessen’s idea about Egohoods.

So here for every street unit I’ve created in DC, I want an estimate of the number of students nearby. I not only want to count the number of kids in attendance in schools nearby, but I also want to weight schools that are closer to the street unit by a higher amount.

So here I read in the street unit data. Also I do not have school attendance counts in this dataset, so I just simulate some numbers to illustrate.

StreetUnits <- read.csv('DC_StreetUnits.csv')
StreetUnits$SchoolWeight <- -1 #Initialize school weight field

#Adding in random school attendance
SchoolLoc$StudentNum <- round(runif(nrow(SchoolLoc),100,2000)) 

Now it is very similar to the previous example, you just do a weighted sum of the attribute, instead of just counting up the weights. Here for illustration purposes I use a different weighting function, inverse distance weighting with a distance cut-off. (I figured this would need a better data management strategy to be timely, but this loop works quite fast as well, again under a minute on my machine.)

#Will use inverse distance weighting with cut-off instead of bi-square
Inv_CutOff <- function(dist,cut){
    ifelse(dist < cut, 1/dist, 0)
}

for (i in 1:nrow(StreetUnits)){
    SU <- t(StreetUnits[i,2:3])
    Dis <- sqrt( (SchoolLoc$XMeters - SU[1])^2 + (SchoolLoc$YMeters - SU[2])^2 )
    Weights <- Inv_CutOff(Dis,cut=8000)
    StreetUnits[i,'SchoolWeight'] <- sum( Weights*SchoolLoc$StudentNum )
}   

The same idea could be used for other attributes, like sales volume for restaurants to get a measure of the business of the location (I think more recent work of John Hipp’s uses the number of employees).

Some attributes you may want to do the weighted mean instead of a weighted sum. For example, if you were using estimates of the proportion of residents in poverty, it makes more sense for this measure to be a spatially smoothed mean estimate than a sum. In this case it works exactly the same but you would replace sum( Weights*SchoolLoc$StudentNum ) with sum( Weights*SchoolLoc$StudentNum )/sum(Weights). (You could use the centroid of census block groups in place of the polygon data.)

Some Wrap-Up

Using these buffer weights really just swaps out one arbitrary decision for data analysis (the buffer distance) with another (the distance weighting function). Although the weighting function is more complicated, I think it is probably closer to reality for quite a few applications.

Many of these different types of spatial estimates are all related to one another (kernel density estimation, geographically weighted regression, kriging). So there are many different ways that you could go about making similar estimates. Not letting the perfect be the enemy of the good, I think what I show here will work quite well for many crime analysis applications.

Reasons Police Departments Should Consider Collaborating with Me

Much of my academic work involves collaborating and consulting with police departments on quantitative problems. Most of the work I’ve done so far is very ad-hoc, through either the network of other academics asking for help on some project or police departments cold contacting me directly.

In an effort to advertise a bit more clearly, I wrote a page that describes examples of prior work I have done in collaboration with police departments. That discusses what I have previously done, but doesn’t describe why a police department would bother to collaborate with me or hire me as a consultant. In fact, it probably makes more sense to contact me for things no one has previously done before (including myself).

So here is a more general way to think about (from a police department’s or criminal justice agency’s perspective) whether it would be beneficial to reach out to me.

Should I do X?

So no one is going to be against different evidence based policing practices, but not all strategies make sense for all jurisdictions. For example, while focused deterrence has been successfully applied in many different cities, if you do not have much of a gang violence problem it probably does not make sense to apply that strategy in your jurisdiction. Implementing any particular strategy should take into consideration the cost as well as the potential benefits of the program.

Should I do X may involve more open ended questions. I’ve previously conducted in person training for crime analysts that goes over various evidence based practices. It also may involve something more specific, such as should I redistrict my police beats? Or I have a theft-from-vehicle problem, what strategies should I implement to reduce them?

I can suggest strategies to implement, or conduct cost-benefit analysis as to whether a specific program is worth it for your jurisdiction.

I want to do X, how do I do it?

This is actually the best scenario for me. It is much easier to design a program up front that allows a police department to evaluate its efficacy (such as designing a randomized trial and collecting key measures). I also enjoy tackling some of the nitty-gritty problems of implementing particular strategies more efficiently or developing predictive instruments.

So you want to do hotspots policing? What strategies do you want to do at the hotspots? How many hotspots do you want to target? Those are examples of where it would make sense to collaborate with me. Pretty much all police departments should be doing some type of hot spots policing strategy, but depending on your particular problems (and budget constraints), it will change how you do your hot spots. No budget doesn’t mean you can’t do anything — many strategies can be implemented by shifting your current resources around in particular ways, as opposed to paying for a special unit.

If you are a police department at this stage I can often help identify potential grant funding sources, such as the Smart Policing grants, that can be used to pay for particular elements of the strategy (that have a research component).

I’ve done X, should I continue to do it?

Have you done something innovative and want to see if it was effective? Or are you putting a bunch of money into some strategy and are skeptical it works? It is always preferable to design a study up front, but often you can conduct pretty effective post-hoc analysis using quasi-experimental methods to see if some crime reduction strategy works.

If I don’t think you can do a fair evaluation I will say so. For example I don’t think you can do a fair evaluation of chronic offender strategies that use officer intel with matching methods. In that case I would suggest how you can do an experiment going forward to evaluate the efficacy of the program.

Mutual Benefits of Academic-Practitioner Collaboration

Often I collaborate with police departments pro bono — so you may ask, what is in it for me then? As an academic I get evaluated mostly by my research productivity, which involves writing peer reviewed papers and getting research grants. So money is not the main factor from my perspective. It is typically easier to write papers about innovative problems or programs. If it involves applying for a grant (on a project I am interested in) I will volunteer my services to help write the grant and design the study.

I could go through my career writing papers without collaborating with police departments. But my work with police departments is more meaningful. It is not zero-sum; I tend to get better ideas from understanding specific agencies’ problems.

So get in touch if you think I can help your agency!

CAN SEBP webcast on predictive policing

I was recently interviewed for a webcast by the Canadian Society of Evidence Based Policing on Predictive Policing.

I am not directly affiliated with any software vendor, so these are my opinions as an outsider, academic, and regular consultant for police departments on quantitative problems.

I do have some academic work on predictive policing applications that folks can peruse at the moment (listed below). The first is on evaluating the accuracy of person-based predictions; the second addresses the problem of disproportionate minority contact in spatial predictive systems.

  • Wheeler, Andrew P., Robert E. Worden, and Jasmine R. Silver. (2018) The predictive accuracy of the Violent Offender Identification Directive (VOID) tool. Conditionally accepted at Criminal Justice and Behavior. Pre-print available here.
  • Wheeler, Andrew P. (2018) Allocating police resources while limiting racial inequality. Pre-print available here.

I have some more work on predictive policing applications in the pipeline, so just follow the blog or follow me on Twitter for updates about future work.

If police departments are interested in predictive policing applications and would like to ask me some questions, always feel free to get in contact. (My personal email is listed on my CV, my academic email is just Andrew.Wheeler at utdallas.edu.)

Most of my work consulting with police departments is ad-hoc (and much of it is pro bono), so if you think I can be of help always feel free to get in touch. Either for developing predictive applications or evaluating whether they are effective at achieving the outcomes you are interested in.

Monitoring Use of Force in New Jersey

Recently ProPublica published a map of uses-of-force across different jurisdictions in New Jersey. Such information can be used to monitor whether agencies are overall doing a good or bad job.

I’ve previously discussed the idea of using funnel charts to spot outliers, mostly around homicide rates but the idea is the same when examining any type of rate. For example in another post I illustrated its use for examining rates of officer involved shootings.

Here is another example applying it to lesser uses of force in New Jersey. Below is the rate of use of force reports per the total number of arrests. (Code to replicate at the end of the post.)

The average use of force per arrest in the state is around 3%, so the error bars are drawn relative to the state average. Here is an interactive chart in which you can use tool tips to see the individual jurisdictions.
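For a rough idea of how the funnel limits can be constructed, here is a Python sketch using a normal approximation to the binomial around the statewide rate (the file and column names are illustrative, and the replication code linked at the end of the post may differ in its details):

import numpy as np
import pandas as pd

uof = pd.read_csv('nj_use_of_force.csv')  #hypothetical file, one row per agency
p0 = uof['force_reports'].sum() / uof['arrests'].sum()  #statewide rate, around 3%

#funnel limits around p0 as a function of the number of arrests
n = np.linspace(uof['arrests'].min(), uof['arrests'].max(), 500)
se = np.sqrt(p0 * (1 - p0) / n)
lower95, upper95 = p0 - 1.96*se, p0 + 1.96*se
lower99, upper99 = p0 - 2.58*se, p0 + 2.58*se

uof['rate'] = uof['force_reports'] / uof['arrests']
#plot uof['arrests'] vs uof['rate'] as points and n vs the limit curves as lines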

Now the original press release, noted by Seth Stoughton on Twitter, showed that several towns have ratios of black to white use of force that are very high. Scott Wolfe suspected that was partly a function of smaller towns having more variable rates. Basically, since one is comparing the ratio between two rates each measured with error, the error bars around the rate ratio will also be quite large.

Here is the chart showing the same type of funnel around the rate ratio of black to white use-of-force, relative to the average over the whole sample (the black use of force rate is 3.2 percent of arrests, the white use of force rate is 2.4 percent, and the rate ratio between the two is 1.35). I show in the code how I constructed this, which I should write a blog post about in itself, but in short there are decisions I could make to make the intervals wider. So the points that are just slightly above a ratio of 2 at around 10,000 arrests are arguably not outliers; those more to the top-right of the plot are much better evidence. (I’d note that if one group is very small, you could always make these error bars really large, so to construct them you need to make reasonable assumptions about the size of the two groups you are comparing.)
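And as one standard way to put a funnel around a rate ratio (not necessarily the exact construction used for the chart above, and as noted you need an assumption about how each agency’s arrests split between the two groups; here a fixed 50/50 split is assumed purely for illustration):

import numpy as np

rr0 = 1.35               #overall black/white rate ratio from the text
p_b, p_w = 0.032, 0.024  #overall use of force rates per arrest from the text
share_b = 0.5            #assumed share of arrests that are of black arrestees (an assumption)

n = np.linspace(100, 50000, 500)   #total arrests per agency
n_b, n_w = n*share_b, n*(1 - share_b)
#approximate standard error of log(rate ratio) using expected event counts under the overall rates
se_log_rr = np.sqrt(1/(n_b*p_b) + 1/(n_w*p_w))
upper = rr0 * np.exp(1.96 * se_log_rr)
lower = rr0 * np.exp(-1.96 * se_log_rr)
#plot each agency's observed rate ratio against its arrests, with these curves as the funnel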

And here is another interactive chart in which you can view the outliers again. In the original press release, Millville, Lakewood, and South Orange are noted as outliers. Using arrests as the denominator instead of population, they each have a rate ratio of around 2. In this chart Millville and Lakewood are outside the bounds, but just barely. South Orange is within the bounds. So those aren’t the places I would have called out according to this chart.

In that same Twitter thread other folks noted the potential reliability/validity of such data (Pete Moskos and Kyle McLean). These charts cannot say why individual agencies are outliers — either high or low. It could be that their officers are really using force at different rates; it could also be that they are using different definitions for reporting force. There are other potential explanations for the use of force distribution, as well as for the black vs white ratio differences — no doubt policing in Princeton vs Camden is substantively different. Also even if all individual agencies are doing well, it does not mean there are no potential problem officers (as noted by David Pyrooz, often a few officers contribute to most UoF).

Despite these limitations, I still think there is utility in this type of monitoring. It is basically a flag to dig deeper when anomalous patterns are spotted. Those unaccounted-for factors contribute to more points being pushed outside of my constructed limits (overdispersion), but the charts still clearly indicate when a pattern is so far outside the norm of what is expected that the public deserves some explanation. Also it highlights when agencies are potentially doing well, and so their current practices can be promoted.

This is a terrific start to effectively monitoring police agencies by ProPublica — state criminal justice agencies should be doing this themselves though.

Here is the code to replicate the analysis.