By Chris Baraniuk
Police in the UK want to predict serious violent crime using artificial intelligence, New Scientist can reveal. The idea is that individuals flagged by the system will be offered interventions, such as counselling, to avert potential criminal behaviour.
However, one of the world’s leading data science institutes has expressed serious concerns about the project after seeing a redacted version of the proposals.
The system, called the National Data Analytics Solution (NDAS), uses a combination of AI and statistics to try to assess the risk of someone committing or becoming a victim of gun or knife crime, as well as the likelihood of someone falling victim to modern slavery.
West Midlands Police is leading the project and has until the end of March 2019 to produce a prototype. Eight other police forces, including London’s Metropolitan Police and Greater Manchester Police, are also involved. NDAS is being designed so that every police force in the UK could eventually use it.
Police funding has been cut significantly over recent years, so forces need a system that can look at all individuals already known to officers, with the aim of prioritising those who need interventions most urgently, says Iain Donnelly, the police lead on the project.
As for exactly what will happen when such individuals are identified, that is still a matter of discussion, says Donnelly. He says the intention isn’t to pre-emptively arrest anyone, but rather to provide support from local health or social workers. For example, they could offer counselling to an individual with a history of mental health issues who had been flagged by NDAS as likely to commit a violent crime. Potential victims could be contacted by social services.
This is the first such project of its kind in the world, pooling multiple data sets from a number of police forces for crime prediction, says Donnelly. In the early phases, the team gathered more than a terabyte of data from local and national police databases, including records of people being stopped and searched and logs of crimes committed. Around 5 million individuals were identifiable from the data.
Looking at this data, the software found nearly 1400 indicators that could help predict crime, including around 30 that were particularly powerful. These included the number of crimes an individual had committed with the help of others and the number of crimes committed by people in that individual’s social group.
The machine learning component of NDAS will use these indicators to predict which individuals known to the police may be on a trajectory of violence similar to that observed in past cases, but who haven’t yet escalated their activity. Such people will be assigned a risk score indicating the likelihood of future offending.
Will it work?
West Midlands Police hopes to generate its first predictions using NDAS early next year. It will work with the UK’s data watchdog, the Information Commissioner’s Office, to ensure NDAS meets privacy regulations.
However, aspects of the project have already drawn criticism. A team at the Alan Turing Institute in London saw a redacted version of the NDAS proposal last year and will publish their verdict on it in a report this week.
New Scientist has seen that report. In it, the team says there are “serious ethical issues” with NDAS and questions whether it is in the public good to intervene pre-emptively when an individual may not have committed a crime or be likely to do so in the future. The researchers say that although the proposal is ethically well-intentioned overall, it fails to recognise important issues in full, and that inaccurate prediction is a concern.
By basing predictions on records of past arrests, analytical tools run the risk of limiting police enquiries to well-trodden locations and can reinforce bias, says Andrew Ferguson at the University of the District of Columbia. Arrests correlate with where police are deployed and not where crime is, which tends to disproportionately affect people of colour and residents of poor neighbourhoods, he says.
Martin Innes, director of the Crime and Security Research Institute at Cardiff University, UK, says he is “sceptical” that the system will reliably predict offences at an individual level. The tool will probably be more useful for generally locating communities at risk, he says.
West Midlands Police has asked Innes and his colleagues to independently evaluate the effectiveness of NDAS at a later date.
An inherent difficulty with such systems is knowing whether the predictions would have turned out to be valid had police or other services not intervened, says Sandra Wachter at the Oxford Internet Institute. “How would I know that this actually makes the right decision? That’s something that is very hard to measure.”
The rise of predictive policing
Around the world, police are increasingly using data to predict crime. PredPol, developed at Santa Clara University in California, tries to identify future crime hotspots, for example. The system has been used in both the US and the UK. The Los Angeles police has a programme that assigns individuals a risk score based on traits such as whether they have previous convictions or are known members of a gang. Patrols are adjusted to keep a closer eye on the “riskiest” people.
The Netherlands uses another software tool that analyses crime data as well as social data in specific areas – such as people’s ages, their incomes and whether they claim benefits. This is used to predict where in a city specific types of crimes are more likely to occur.
Some applications have come in for condemnation, however. Earlier this year, Human Rights Watch criticised the Chinese authorities for allegedly using predictive policing to pre-emptively detain people in the province of Xinjiang.