Abstract

To generate forensic intelligence from footwear impressions and link crime scenes, most law enforcement agencies and forensic laboratories rely on a manual codification system based on pattern recognition and classification by human analysts. However, although such systems are commonly used in practice, little is known to date about their reliability. Taking advantage of the development of a footwear database for forensic intelligence purposes at the Laboratoire de sciences judiciaires et de médecine légale in Quebec (Canada), this study makes a preliminary assessment of the intra- and inter-rater reliability (i.e., the level of repeatability over time and the level of consensus between analysts) of the proposed codification system. To do so, three forensic intelligence analysts classified a set of 27 crime scene impressions and test impressions at two different times (two weeks apart). Percent agreement, Cohen’s Kappa, and Light’s Kappa were then calculated. Results show that two of the three analysts reached an almost perfect level of intra-rater agreement, while the third achieved a substantial level, and that all analysts reached a substantial level of inter-rater agreement. Findings suggest that, although a few patterns may have lower levels of agreement, overall, the developed codification system presents a satisfactory level of reliability. This preliminary study thus suggests that, contrary to what advocates of fully automated systems may sometimes imply, manual codification of footwear impressions may be fairly appropriate for intelligence purposes. It calls for further evaluative research in the field.
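The agreement statistics named above can be illustrated with a short sketch. This is not the study's code; it is a minimal, self-contained Python illustration of how Cohen's Kappa (chance-corrected agreement between two raters) and Light's Kappa (the mean of pairwise Cohen's Kappas across all rater pairs) are commonly computed from categorical labels. The rater labels used in the example are hypothetical.

```python
from collections import Counter
from itertools import combinations


def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels of the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's marginals.
    """
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)


def lights_kappa(ratings):
    """Light's kappa: mean of Cohen's kappa over all pairs of raters."""
    pairs = list(combinations(ratings, 2))
    return sum(cohens_kappa(a, b) for a, b in pairs) / len(pairs)


# Hypothetical pattern codes assigned by two raters to four impressions.
r1 = ["zigzag", "zigzag", "circle", "circle"]
r2 = ["zigzag", "zigzag", "circle", "zigzag"]
print(cohens_kappa(r1, r2))          # chance-corrected pairwise agreement
print(lights_kappa([r1, r1, r2]))    # averaged over the three rater pairs
```

Benchmarks such as Landis and Koch's scale are typically used to label the resulting values, e.g. 0.61-0.80 as "substantial" and 0.81-1.00 as "almost perfect" agreement, which is the terminology the abstract uses.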
