Abstract
This paper reviews the crime linkage literature to identify the methods used to pre-process data for analysis, to predict linkage status/series membership, and to assess the accuracy of linkage predictions. Thirteen databases were searched, with 77 papers meeting the inclusion/exclusion criteria. Methods used to pre-process data included human judgement, similarity metrics (including machine learning approaches), spatial and temporal measures, and Mokken Scaling. Jaccard's coefficient and other measures of similarity (e.g., temporal proximity, inter-crime distance, similarity vectors) were the most common ways of pre-processing data. Methods for predicting linkage status were varied and included human (expert) judgement, logistic regression, multi-dimensional scaling, discriminant function analysis, principal component analysis and multiple correspondence analysis, Bayesian methods, fuzzy logic, and iterative classification trees. A common way of assessing linkage-prediction accuracy was to calculate the hit rate, although position on a ranked list was also used; receiver operating characteristic (ROC) analysis has emerged as a popular method of assessing accuracy.
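To illustrate the similarity measure named above, the following is a minimal sketch (not drawn from any of the reviewed studies) of Jaccard's coefficient applied to two crimes coded as sets of binary behavioural features; the feature labels are hypothetical and chosen only for illustration.

```python
def jaccard(crime_a: set[str], crime_b: set[str]) -> float:
    """Jaccard's coefficient: behaviours shared by both crimes divided by
    behaviours observed in either crime (0 = no overlap, 1 = identical)."""
    union = crime_a | crime_b
    if not union:
        return 0.0
    return len(crime_a & crime_b) / len(union)

# Hypothetical behaviour codes for two offences.
crime_1 = {"forced_entry", "weapon_used", "property_stolen"}
crime_2 = {"forced_entry", "property_stolen", "victim_bound"}

print(jaccard(crime_1, crime_2))  # 0.5 -> two shared behaviours out of four observed
```

In a linkage task, a score such as this would typically be computed for each crime pair and then passed to a prediction method (e.g., logistic regression) to estimate linkage status.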