Law enforcement agencies increasingly use predictive and automation technologies whose core component is often a machine learning (ML) model. The article considers the problem of accountability and responsibility of law enforcement agencies and officials in connection with the use of ML models. The authors point out that accountability is a key element of democratic law enforcement, but that the use of predictive software can create challenges in ensuring it. The article discusses how the application of ML can obfuscate responsibility and complicate accountability in "multi-agent structures" that combine humans and computational tools. Special attention is paid to the opacity of predictive algorithms and automated decision-making systems, which becomes a source of misunderstanding and caution regarding their use. The authors ask how effective oversight and full accountability can be ensured when key components of decision-making systems remain unknown to the general public, to officials, and even to the developers of the models. The paper argues that important questions about ML decision models can be resolved without detailed knowledge of the underlying learning algorithms, allowing non-ML law enforcement experts to examine them as a form of intelligent oversight: non-ML experts can and should review trained ML models. The authors provide a "toolkit" of questions about three elements of ML-based decision models that can be explored qualitatively by non-experts: the training data, the training objective, and anticipatory evaluation of outcomes. This approach enables such experts to assess the use of ML models in law enforcement tasks objectively and to evaluate the models' effectiveness through the prism of their own professional experience. The basic idea is that even without deep technical knowledge, law enforcement experts can analyze and review ML models; this promotes understanding of machine learning technologies in law enforcement and expands the potential of non-ML experts.
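As a purely illustrative sketch (not from the article itself), the three-element question toolkit could be organized as a simple structured checklist. The three element names follow the abstract; the `ModelReviewChecklist` structure and the specific questions are hypothetical placeholders for the kind of qualitative prompts a non-ML reviewer might work through.

```python
# Hypothetical sketch of a three-element review "toolkit".
# The three elements (training data, training objective, anticipated
# outcome evaluation) come from the abstract; the concrete questions
# below are illustrative assumptions, not the authors' own wording.
from dataclasses import dataclass, field


@dataclass
class ModelReviewChecklist:
    """Qualitative questions a non-ML expert can put to a trained model."""

    training_data: list[str] = field(default_factory=lambda: [
        "What population and time period do the records cover?",
        "Could past enforcement practice have biased what was recorded?",
    ])
    training_objective: list[str] = field(default_factory=lambda: [
        "What quantity is the model optimized to predict?",
        "Does that target actually match the agency's operational goal?",
    ])
    outcome_evaluation: list[str] = field(default_factory=lambda: [
        "How will predictions be checked against real-world outcomes?",
        "Who is accountable when a prediction leads to a wrong decision?",
    ])

    def review(self) -> None:
        # Walk the three elements in order and print each question,
        # mirroring a structured, non-technical audit session.
        for element, questions in (
            ("Training data", self.training_data),
            ("Training objective", self.training_objective),
            ("Outcome evaluation", self.outcome_evaluation),
        ):
            print(f"== {element} ==")
            for question in questions:
                print(f" - {question}")


if __name__ == "__main__":
    ModelReviewChecklist().review()
```

The point of the sketch is only that the review is structured and repeatable, not that it requires any access to the model's internals, which matches the abstract's claim that non-ML experts can conduct it.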