Tom Cruise, we’re still playing catch-up. Cyber policing isn’t exactly at the level of Minority Report – yet – but it’s getting there.
One item of interest from the Pre-Crime Department: despite criticism that it over-polices already heavily surveilled minority communities, an artificial intelligence (AI) platform that promises to predict where crime will occur is being trialled in police departments around the US.
The tool comes from a company called PredPol that claims that the software can algorithmically predict crime. As its training manuals show, the notion is based on the broken-windows approach to policing: a strategy of issuing citations for petty crime that doesn’t actually reduce crime but has been shown to damage the relationship between police and communities.
As Motherboard reported in June, PredPol says its software can predict which crimes will happen in areas as small as 500×500 feet, based on historical crime data. That data is fed into an algorithm that spits out predictions of where similar crimes will occur next.
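PredPol’s actual algorithm is proprietary, but the basic idea of binning historical crime data into a grid can be sketched with a few lines of code. This is a hypothetical, minimal hot-spot counter – plain frequency counting over 500×500-foot cells, the kind of baseline that PredPol claims its model improves on – not the company’s method:

```python
from collections import Counter

CELL_FT = 500  # grid cell size in feet, matching PredPol's stated resolution

def cell_for(x_ft, y_ft):
    """Map a coordinate (in feet from some origin) to its 500x500 ft grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def top_cells(incidents, n=3):
    """Rank grid cells by historical incident count -- naive hot-spot analysis."""
    counts = Counter(cell_for(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(n)]

# Toy historical data: (x, y) offsets in feet of past incidents.
incidents = [(120, 80), (130, 90), (140, 60), (610, 700), (620, 710)]
print(top_cells(incidents, n=2))  # -> [(0, 0), (1, 1)]
```

A real system would add time decay, crime-type weighting and the statistical modeling PredPol advertises; the point here is only that the input is a count of where crimes already happened.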
Digital rights advocates say that predictive policing is inherently biased because the data used to make crime predictions comes from years of biased policing strategies that over-criminalize certain neighborhoods. Jake Ader, a contributor to the digital rights group Lucy Parsons Labs, filed a freedom of information request to find out how police in Elgin, Illinois, are using the tool. That’s how he found out that its training manual relies on the much-criticized broken-windows policing strategy.
Fast-forward a few months to this week, and the BBC reports that PredPol is embedding itself in police departments across the US, with more than 50 departments now using it, as well as a handful of forces in the UK. Kent Constabulary, for one, claims that street violence fell by 6% following a four-month trial.
Steve Clark, deputy chief of Santa Cruz Police Department in California:
We found that the model was just incredibly accurate at predicting the times and locations where these crimes were likely to occur.
Those working in this space – besides PredPol, which stands for Predictive Policing, other companies include Palantir, CrimeScan and ShotSpotter Missions – say that the AI version of predictive policing beats traditional hot-spot analysis, which involves reacting to whatever happened in an area previously, as opposed to anticipating what’s likely to happen in the future.
PredPol co-founder and anthropology professor Jeff Brantingham says that AI and machine learning can spot patterns that are too subtle for humans to pick up on:
Machine learning provides a suite of approaches to identifying statistical patterns in data that are not easily described by standard mathematical models, or are beyond the natural perceptual abilities of the human expert.
Maybe so. Still, studies haven’t shown predictive policing producing any results to brag about. John Hollywood, an analyst at the policy research institution Rand Corporation, says that recent advances in analytical techniques have produced only “small, incremental” improvements in crime prediction – results that are 10-25% more accurate than traditional hot-spot mapping, he says:
Current technologies are not much more accurate than traditional methods.
It is enough to help improve deployment decisions, but is far from the popular hype of a computer telling officers where they can go to pick up criminals in the act.
But wait, there’s more: besides raising the suspicions of digital rights advocates and failing to impress Rand analysts, PredPol also managed to expose login pages for 17 US police departments on Tuesday morning – something it apparently failed to predict.
Police are using yet another AI tool to augment their human wetware. It’s called VeriPol: software that uses text analysis and machine learning to identify fake police reports. Computer scientists at Cardiff University and the Charles III University of Madrid claim that VeriPol can identify false robbery reports “with over 80% accuracy.”
VeriPol has been rolled out throughout Spain to help support police officers and indicate where further investigations are necessary. Researchers trained it on over 1,000 proven false claims, teaching the AI to pick up on dubious police claims by using natural language processing.
Specifically, VeriPol uses algorithms to identify and quantify features in text, such as adjectives, acronyms, verbs, nouns, punctuation marks and numbers. The researchers fed VeriPol police reports that were known to be false so that the tool could encode each one and begin to ‘learn’ the patterns specific to fabricated claims.
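The feature-counting step can be illustrated with a short sketch. This is not VeriPol’s code – the published feature set (per-category word counts, punctuation, figures) is richer, and the brand and witness checks below are purely illustrative assumptions – but it shows how a free-text statement becomes the numbers a classifier is trained on:

```python
import re

def text_features(report: str) -> dict:
    """Count surface features of a statement -- a simplified stand-in for
    VeriPol-style inputs; real systems use fuller NLP feature sets."""
    tokens = report.split()
    return {
        "n_words": len(tokens),                          # statement length
        "n_digits": sum(ch.isdigit() for ch in report),  # numbers and figures
        "n_punct": sum(ch in ".,;:!?" for ch in report), # punctuation marks
        # Hypothetical cues inspired by the reported findings:
        "mentions_brand": int(bool(re.search(r"iphone|samsung", report, re.I))),
        "mentions_witness": int("witness" in report.lower()),
    }

print(text_features("My iPhone was stolen. It was a black iPhone 8."))
```

Feature dictionaries like this, one per report, would then be fed to a classifier trained on statements labeled true or false.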
Some patterns VeriPol picked up on from the false claims:
- Shorter statements that were more focused on the stolen property than the incident
- A lack of precise detail about the incident itself
- Limited details about a purported attacker
- Lack of witnesses or other hard evidence, such as contacting a police officer or doctor straight after the incident
Many false reports also claim loss of specific high-end items, according to Dr. Jose Collados, a research associate:
Clear indicators of falsehood were descriptions of the type of objects stolen. References to iPhones and Samsung were associated with false claims, whereas bicycles and necklaces were correlated with true reports.
The researchers put VeriPol to work in a real-world pilot study in the urban areas of Murcia and Malaga in Spain in June 2017. They report that in just one week, 25 cases of false robbery reports were detected in Murcia, resulting in the cases being closed, and a further 39 were detected and closed in Malaga.
For comparison’s sake, the average number of false reports detected and cases closed in the month of June during the eight years between 2008 and 2016 was 3.33 in Murcia and 12.14 in Malaga – so VeriPol’s one-week haul was roughly seven times Murcia’s monthly average and three times Malaga’s.
Our study has given us a fascinating insight into how people lie to the police, and a tool that can be used to deter people from doing so in the future.
The ultimate hope, he said, is that by showing that automatic detection is possible, VeriPol will deter people from lying to the police in the first place. As it is, filing a false police statement is a crime that carries serious consequences, such as jail terms and heavy fines. Bogus reports also contaminate police databases, damage the outcomes of criminal investigations, and waste public resources that could be dedicated to pursuing other crimes, the researchers said.
TomiPol, can you please predictively tell me where I put my cellphone?
So, what do you do if you really, truly got mugged and lost an iPhone? Make sure to take note of as many details about the incident and the attacker as possible, and call the police and/or get medical attention straight away. Coming off as fuzzy on specifics is clearly not the way to impress the AI interrogators who could be parsing your written statement!
Source: Naked Security