Abstract

Police departments around the world implement algorithmic systems to enhance various policing tasks. Ensuring that such innovations take place responsibly – with public values upheld – is essential for public organisations. This paper analyses how public values are safeguarded in the case of MONOcam, an algorithmic camera system designed and used by the Netherlands police. The system employs artificial intelligence to detect whether car drivers are holding a mobile device. MONOcam can be considered a good example of value-sensitive design: many measures were taken to safeguard public values in this algorithmic system. In pursuit of the responsible implementation of algorithms, most calls and literature focus on such value-sensitive design; less attention is paid to what happens beyond design. Building on more than 120 hours of ethnographic observations, as well as informal conversations and three semi-structured interviews, this research shows that public values deemed safeguarded in design are renegotiated as the system is implemented and used in practice. These findings had direct impact, as MONOcam was improved in response. This paper thus highlights that algorithmic system design is often based on an ideal world, but it is in the complex, fuzzy realities of everyday professional routines and sociomaterial practice that these systems are enacted and public values are renegotiated. While value-sensitive design is important, this paper shows that it offers no guarantee that public values are safeguarded in practice.
