According to the developers of PredPol, their algorithm uses only three pieces of data to make predictions: the type of historical crime, the place it happened and when it happened. It doesn’t explicitly include any personal information – like race or gender – that could directly bias results against certain groups.

Using the PredPol algorithm, Lum and Isaac predicted where drug crimes would have been expected to occur in 2011. They also calculated the actual distribution of drug crimes that year – including those that went unreported – using data from the National Survey on Drug Use and Health. If the algorithm’s predictions were accurate, they would have expected it to flag up the areas where the crimes actually happened. But instead, it seemed to point mostly to areas where arrests had previously occurred. The pair noted that this could produce a feedback loop between understanding and controlling crime. ‘Because these predictions are likely to over-represent areas that were already known to police, officers become increasingly likely to patrol these same areas and observe new criminal acts that confirm their prior beliefs regarding the distributions of criminal activity.’[82]
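The feedback loop Lum and Isaac describe can be illustrated with a toy simulation (a sketch of the general mechanism, not their model or PredPol's): two areas with identical true crime rates, where patrols follow past arrest records, and more patrols mean more of the crime that occurs gets recorded.

```python
import random

random.seed(1)

# Toy model (hypothetical, not PredPol): two areas, A and B, with the
# SAME underlying crime rate. Patrols are allocated in proportion to
# past recorded arrests, and the chance a crime is recorded depends on
# patrol presence, so early over-policing of A perpetuates itself.
true_rate = [50, 50]   # actual crimes per period in each area
arrests = [10, 5]      # historical recorded arrests (A starts over-policed)

for period in range(20):
    total = sum(arrests)
    patrol_share = [a / total for a in arrests]  # patrols follow past data
    for area in range(2):
        # Each crime is recorded only if a patrol is nearby to observe it.
        recorded = sum(random.random() < patrol_share[area]
                       for _ in range(true_rate[area]))
        arrests[area] += recorded

share_A = arrests[0] / sum(arrests)
print(f"Share of recorded crime in area A: {share_A:.2f}")
```

Even though both areas generate the same number of crimes, area A's initial surplus of recorded arrests draws more patrols, which record more crime there, keeping its share of the data well above one half indefinitely.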

Some people criticised the analysis, arguing that police didn’t use PredPol to predict drug crimes. However, Lum said that this missed the wider point, because the aim of predictive policing methods is to make decisions more objective. ‘The implicit argument is that you want to remove human bias from the system.’ If predictions reflect existing police behaviour, however, these biases will persist, hidden behind a veil of a supposedly objective algorithm. ‘When you’re training it with data that’s generated by the same system in which minority people are more likely to be arrested for the same behaviour, you’re just going to perpetuate those same issues,’ she said. ‘You have the same problems, but now filtered through this high-tech tool.’

Crime algorithms have more limitations than people might think. In 2013, researchers at RAND Corporation outlined four common myths about predictive policing.[83] The first was that a computer knows exactly what will happen in the future. ‘These algorithms predict the risk of future events, not the events themselves,’ they noted. The second myth was that a computer would do everything, from collecting relevant crime data to making appropriate recommendations. In reality, computers work best when they assist human analysis and decisions about policing, rather than replacing them entirely. The third myth was that police forces needed a high-powered model to make good predictions, whereas often the problem is getting hold of the right data. ‘Sometimes you have a dataset where the information you need to make the prediction just isn’t contained in that dataset,’ as Lum put it.

The final, and perhaps most persistent, myth was that accurate predictions automatically lead to reductions in crime. ‘Predictions, on their own, are just that – predictions,’ wrote the RAND team. ‘Actual decreases in crime require taking action based on those predictions.’ To control crime, agencies therefore need to focus on interventions and prevention rather than simply making predictions. This is true for other outbreaks too. According to Chris Whitty, now the Chief Medical Officer for England, the best mathematical models are not necessarily the ones that try to make an accurate forecast about the future. What matters is having analysis that can reveal gaps in our understanding of a situation. ‘They are generally most useful when they identify impacts of policy decisions which are not predictable by commonsense,’ Whitty has suggested. ‘The key is usually not that they are “right”, but that they provide an unpredicted insight.’[84]

In 2012, police in Chicago introduced the ‘Strategic Subjects List’ (SSL) to predict who might be involved in a shooting. The project was partly inspired by Andrew Papachristos’s work on social networks and gun violence in the city, although Papachristos has distanced himself from the SSL.[85] The list itself is based on an algorithm that calculates risk scores for certain city inhabitants. According to its developers, the SSL does not explicitly include factors like gender, race or location. For several years, though, it wasn’t clear what did go into it. After pressure from the Chicago Sun-Times, the Chicago Police Department finally released the SSL data in 2017. The dataset contained the information that went into the algorithm – like age, gang affiliations, and prior arrests – as well as the corresponding risk scores it produced. Researchers were positive about the move. ‘It’s incredibly rare – and valuable – to see the public release of the underlying data for a predictive policing system,’ noted Brianna Posadas, a fellow with the social justice organisation Upturn.[86]

There were around 400,000 people in the full SSL database, with almost 290,000 of them deemed high risk. Although the algorithm didn’t explicitly include race as an input, there was a noticeable difference between groups: over half of black twenty-something men in Chicago had an SSL score, compared with 6 per cent of white men. There were also a lot of people who had no clear link to violent crime, with around 90,000 ‘high-risk’ individuals having never been arrested or a victim of crime.[87]

This raises the question of what to do with such scores. Should police monitor people who don’t have any obvious connection to violence? Recall that Papachristos’s network studies in Chicago focused on victims of gun violence, not perpetrators; the aim of such analysis was to help save lives. ‘One of the inherent dangers of police-led initiatives is that, at some level, any such efforts will become offender-focused,’ Papachristos wrote in 2016. He argued that there is a role for data in crime prevention, but it doesn’t have to be solely a police matter. ‘The real promise of using data analytics to identify those at risk of gunshot victimization lies not with policing, but within a broader public health approach.’ He suggested that predicted victims could benefit from the support of people like social workers, psychologists, and violence interrupters.

Successful crime reduction can come in a variety of forms. In 1980, for example, West Germany made it mandatory

You are reading The Rules of Contagion