Abstract

There is an increasing use of algorithmic or machine learning-based intelligence analysis in the UK policing context. Two of the most high-profile intelligence retention and analysis practices used by the Metropolitan Police have recently been found to be unlawful: i) the indefinite retention of a peaceable individual’s records on a specialist domestic extremism database, and ii) the overly lengthy retention of records, disproportionately concerning BAME Londoners, on a ‘Gangs Matrix’. These two findings, from the European Court of Human Rights and the UK Information Commissioner’s Office respectively, indicate that forces heeding the 2018 call of Her Majesty’s Chief Inspector of Constabulary to devote more resources to ‘AI’ for policing purposes must do so carefully. Indeed, the new National Data Analytics Solution (NDAS) project, based within West Midlands Police, has recently been the subject of critical ethical scrutiny on a number of fronts. The West Midlands force has had its own data-driven ‘Integrated Offender Management’ tool delayed by demands for greater clarity from a bespoke ethics committee. This may have headed off a later finding of unlawfulness in the courts, as the tool could have been challenged by way of judicial review on administrative law principles, as well as under data protection, human rights, and equality law. This chapter therefore seeks to draw out lessons for policymakers from these early skirmishes in the field of ‘predictive policing’. It concludes with some observations on the need for a set of minimum standards of transparency in a statutory authorization process for algorithmic police intelligence analysis tools (APIATs), in a mooted Predictive Policing (Technology) Bill.
