Abstract
Forensic latent print examiners usually, but do not always, reproduce each other’s conclusions. Using data from tests of experts conducting fingerprint comparisons, we show the extent to which differing conclusions can be explained in terms of the images and in terms of the examiners. Some images are particularly prone to disagreements or erroneous conclusions; the highest- and lowest-quality images generally result in unanimous conclusions. The variability among examiners can be seen as the effect of implicit individual decision thresholds, which we demonstrate are measurable and differ substantially among examiners; this variation may reflect differences in skill, risk tolerance, or bias. Much of the remaining variability relates to inconsistency of the examiners themselves: borderline conclusions (i.e., those close to individual decision thresholds) often were not repeated by the same examiner, and tended to be completed more slowly and rated as difficult. A few examiners have significantly higher error rates than most: aggregate error rates across many examiners are not necessarily representative of individual examiners. The use of a three-level conclusion scale does not precisely represent the underlying agreements and disagreements among examiners. We propose a new method of quantifying examiner skill that would be appropriate for use in proficiency tests. These findings are operationally relevant to staffing, quality assurance, and disagreements among experts in court.