Abstract

Post-crime, identifying the firearm type from the remnants of a fired bullet is a daunting task for any ballistics expert. The identifying features are few and nearly obliterated by impact, which makes the process a strong candidate for AI, the first step being segmentation and enhancement of those features. However, the bullet's metal surface makes image capture and analysis more complicated than in other common domains. In the present study, an attempt is made to extract one of the defining features of fired bullets, namely striations, using deep learning techniques to assist automated firearm identification. U-net, a CNN-based semantic segmentation architecture, and two of its variants (the Inception U-net and the Residual U-net) are used to achieve this objective. The U-net architecture achieved 88% accuracy with a training loss as low as 0.0231 after 700 epochs of training. The Inception U-net and Residual U-net architectures achieved training accuracies of 88.30% and 88.79%, respectively, while their training losses fell to 0.0194 and 0.0151, respectively, over the same number of epochs. With 10-fold cross-validation, the accuracy of the Residual U-net further improved to 89.70%. One key observation from the three models' training curves is that convergence is significantly faster in the Residual U-net than in the Inception U-net, which in turn converges much faster than the plain U-net. Supported by statistical analysis, the study establishes that deep learning techniques are valuable for segmenting striation marks from bullet images and can help ballistics experts identify firearms.
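All three networks share the U-net encoder–decoder topology with skip connections between the contracting and expanding paths. The following is a shape-level sketch of that data flow, not the authors' implementation: `conv_block` is a hypothetical stand-in that only mimics the channel and spatial behaviour of paired 3×3 same-padded convolutions, and the channel widths are illustrative assumptions.

```python
import numpy as np

def conv_block(x, out_ch):
    # Stand-in for two 3x3 same-padded conv + ReLU layers:
    # spatial size preserved, channel count changed to out_ch.
    h, w, _ = x.shape
    return np.full((h, w, out_ch), x.mean())

def down(x):
    # 2x2 max pooling halves the spatial resolution.
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    # Nearest-neighbour 2x upsampling restores resolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_forward(x, chs=(16, 32, 64)):
    skips = []
    for c in chs[:-1]:              # contracting path
        x = conv_block(x, c)
        skips.append(x)             # feature map saved for the skip connection
        x = down(x)
    x = conv_block(x, chs[-1])      # bottleneck
    for c, s in zip(reversed(chs[:-1]), reversed(skips)):  # expanding path
        x = up(x)
        x = np.concatenate([x, s], axis=-1)  # skip connection: concat encoder features
        x = conv_block(x, c)
    return conv_block(x, 1)         # single-channel striation mask

mask = unet_forward(np.random.rand(64, 64, 1))
print(mask.shape)  # → (64, 64, 1)
```

The skip connections are what distinguish U-net from a plain encoder–decoder: fine spatial detail from the contracting path is concatenated back in during upsampling, which matters for thin structures such as striation marks. The Residual and Inception variants replace `conv_block` with residual and multi-branch convolution blocks, respectively, without changing this overall topology.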
