Abstract

For several decades, legal and scientific scholars have argued that conclusions from forensic examinations should be supported by statistical data and reported within a probabilistic framework. Multiple models have been proposed to quantify and express the probative value of forensic evidence. Unfortunately, the use of statistics to perform inferences in forensic science adds a layer of complexity that most forensic scientists, court officers and lay individuals are not equipped to handle. Many applications of statistics to forensic science rely on ad hoc strategies and are not scientifically sound. The opacity of the technical jargon used to describe probabilistic models and their results, and the complexity of the techniques involved, make it very difficult for the untrained user to separate the wheat from the chaff. This series of papers is intended to help forensic scientists and lawyers recognize limitations and issues in tools proposed to interpret the results of forensic examinations. This article focuses on tools that have been proposed to leverage similarity scores to assess the probative value of forensic findings. We call this family of tools ‘score-based likelihood ratios’. In this article, we present the fundamental concepts on which these tools are built, we describe some specific members of this family of tools, and we compare them to the Bayes factor through an intuitive geometrical approach and through simulations. Finally, we discuss their validation and their potential usefulness as a decision-making tool in forensic science.
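To make the central idea concrete, the following is a minimal sketch of how a score-based likelihood ratio is typically computed: densities are fitted to similarity scores from known same-source and known different-source comparison pairs, and the ratio of those densities is evaluated at the score observed in a case. The data, distributions, and function names here are illustrative assumptions, not the specific models examined in the article.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical training data: similarity scores from pairs known to
# come from the same source, and from pairs known to come from
# different sources (simulated here for illustration only).
same_source_scores = rng.normal(loc=0.8, scale=0.10, size=500)
diff_source_scores = rng.normal(loc=0.3, scale=0.15, size=500)

# Fit one density per score population via kernel density estimation.
f_same = gaussian_kde(same_source_scores)
f_diff = gaussian_kde(diff_source_scores)

def score_based_lr(score: float) -> float:
    """Density of the observed score under the same-source model,
    divided by its density under the different-source model."""
    return float(f_same(score)[0] / f_diff(score)[0])

# An SLR above 1 is taken to support the same-source proposition;
# an SLR below 1 to support the different-source proposition.
print(score_based_lr(0.75) > 1.0)
print(score_based_lr(0.25) < 1.0)
```

Note that, as the article discusses, an SLR built this way summarizes the evidence through a single score and is not in general equal to the Bayes factor computed on the underlying features.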
