Abstract

Criminal law’s efficient and accurate administration depends to a considerable extent on the ability of decision-makers to identify unique individuals, circumstances and events as instances of abstract terms (such as events raising ‘reasonable suspicion’) laid out in the legal framework. Automated Facial Recognition (AFR) has the potential to revolutionise the identification process, facilitate crime detection, and eliminate misidentification of suspects. This paper reviews the recent decision regarding the deployment of AFR by South Wales Police. We conclude that the judgment does not give the green light to other fact-sensitive deployments of AFR, and we consider two such deployments: (a) the use of AFR as a trigger for intervention short of arrest; and (b) the use of AFR in an evidential context in criminal proceedings. AFR may on its face appear objective and sufficient, but this appearance is belied by the probabilistic nature of its output and by the values built into the tool, raising questions as to the justifiability of treating the tool’s output as an ‘objective’ ground for reasonable suspicion. If the Article 6 right to a fair trial is to be upheld, the means by which the identification took place must be disclosed to the defence, together with information regarding disregarded ‘matches’ and the error rates and uncertainties of the system itself. Furthermore, AFR raises the risk that scientific or algorithmic findings could usurp the role of the legitimate decision-maker, necessitating the development of a framework to protect the position of the human with decision-making prerogative.
