Abstract

Numerical likelihood-ratio (LR) systems aim to quantify evidential strength for forensic evidence evaluation. Calibration of such LR-systems is essential: one does not want to over- or understate the strength of the evidence. Metrics that measure calibration differ in their sensitivity to calibration errors in such systems. In this paper we compare four calibration metrics in a simulation study based on Gaussian log-LR distributions. Three calibration metrics are taken from the literature (Good, 1985; Royall, 1997; Ramos and Gonzalez-Rodriguez, 2013) [1–3], and a fourth metric is proposed by us. We evaluated these metrics against two performance criteria: differentiation (between well- and ill-calibrated LR-systems) and stability (of the metric's value across a variety of well-calibrated LR-systems). Two metrics from the literature (the expected values of LR and of 1/LR, and the rate of misleading evidence stronger than 2) do not behave as desired under many simulated conditions. The third one (Cllrcal) performs better, but our newly proposed metric (which we coin devPAV) is shown to perform as well as or clearly better than it under almost all simulated conditions. On the basis of this work, we recommend using both devPAV and Cllrcal to measure calibration of LR-systems, where the current results indicate that devPAV is the preferred metric. In future work, the external validity of this comparison study can be extended by simulating non-Gaussian LR-distributions.
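As background to the simulation setup, the following sketch illustrates two building blocks the abstract relies on: sampling from a well-calibrated Gaussian log-LR system (for a natural-log LR, calibration implies LLR ~ N(+σ²/2, σ²) under the same-source hypothesis and N(−σ²/2, σ²) under the different-source hypothesis) and computing the log-likelihood-ratio cost Cllr, from which Cllrcal is derived. The parameter names and σ = 2 are illustrative choices, not values from the paper.

```python
import math
import random

def sample_calibrated_llrs(sigma, n, rng):
    """Draw natural-log LRs from a well-calibrated Gaussian LR-system:
    N(+sigma^2/2, sigma^2) under H1 (same source) and
    N(-sigma^2/2, sigma^2) under H2 (different source)."""
    mu = sigma ** 2 / 2
    h1 = [rng.gauss(mu, sigma) for _ in range(n)]
    h2 = [rng.gauss(-mu, sigma) for _ in range(n)]
    return h1, h2

def cllr(h1_llrs, h2_llrs):
    """Log-likelihood-ratio cost (Brummer): penalises both poor
    discrimination and poor calibration; 0 is ideal, ~1 is uninformative."""
    a = sum(math.log2(1 + math.exp(-x)) for x in h1_llrs) / len(h1_llrs)
    b = sum(math.log2(1 + math.exp(x)) for x in h2_llrs) / len(h2_llrs)
    return 0.5 * (a + b)

rng = random.Random(42)
h1, h2 = sample_calibrated_llrs(sigma=2.0, n=20000, rng=rng)
print(round(cllr(h1, h2), 3))  # well below 1 for this calibrated system
```

Cllrcal is then the difference between Cllr and its discrimination-only part Cllrmin, where Cllrmin is obtained by optimally recalibrating the scores with the pool-adjacent-violators (PAV) algorithm; devPAV, as the name suggests, is also based on the PAV transform.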
