Abstract

During routine casework, fingerprint examiners are required to make decisions pertaining to the sufficiency of friction ridge skin impressions. Prior experimental research has established that differences of opinion between examiners are expected, though it is uncertain whether these findings are representative of the decisions made during casework. In this study, 5000 job-cards completed by fingerprint experts of the NSW Police Force were scrutinised to track the differences of opinion that occurred between examiners. Experts recorded 19,491 casework decisions, which resulted in 8964 reported identification and inconclusive determinations. Expert decision-making was found to be unanimous in 94.8 % of these determinations; 4.6 % involved one expert-to-expert disagreement; and 0.5 % involved two expert-to-expert disagreements. No determinations featured more than two expert-to-expert disagreements. Expert-to-expert disagreements occurred in 3.7 % of all identification and inconclusive casework verification decisions. However, verifying experts were more likely to agree with a prior expert's identification decision than with a prior expert's inconclusive decision: the observed expert-to-expert identification disagreement rate was 2.0 %, whereas the observed expert-to-expert inconclusive disagreement rate was 12.5 %. Overall, most casework disagreements arose from subjective differences concerning the suitability of friction ridge skin information for comparison or its sufficiency for identification. Experts were more concordant in their decision-making with other experts than with trainees, and were approximately three times more likely to disagree with a prior trainee's identification or inconclusive decision than with a prior expert's. We assume these differences reflect trainees' developing proficiency in assessing the suitability or sufficiency of friction ridge skin impression information.
Differences of opinion in casework are expected, which exposes the subjective nature of fingerprint decision-making. Computer-based quality metric and likelihood ratio tools should be considered for use in casework to guide examiner evaluations and mitigate examiner disagreements.
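The disagreement rates reported above are simple proportions over verification decisions, conditioned on the prior examiner's conclusion. A minimal sketch of that tabulation is below; the record counts are hypothetical and chosen only to reproduce the reported 2.0 % and 12.5 % rates, not taken from the study's raw data.

```python
# Hypothetical verification records as (prior decision, verifier decision) pairs.
# Counts are illustrative only, not the NSW Police Force casework data:
# 10/500 identification verifications and 10/80 inconclusive verifications differ.
records = (
    [("identification", "identification")] * 490
    + [("identification", "inconclusive")] * 10
    + [("inconclusive", "inconclusive")] * 70
    + [("inconclusive", "identification")] * 10
)

def disagreement_rate(records, prior):
    """Share of verifications of a given prior decision where the verifier differed."""
    relevant = [(p, v) for p, v in records if p == prior]
    disagreements = sum(1 for p, v in relevant if v != p)
    return disagreements / len(relevant)

print(f"identification disagreement rate: {disagreement_rate(records, 'identification'):.1%}")
print(f"inconclusive disagreement rate:   {disagreement_rate(records, 'inconclusive'):.1%}")
```

Conditioning on the prior decision, rather than pooling all verifications, is what exposes the asymmetry the study reports: a pooled rate (3.7 % here) would mask the gap between identification and inconclusive verifications.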
