After a crime, identifying the firearm type from the remnants of fired bullets is a daunting task for any ballistics expert. The features used for identification are few and often nearly obliterated by impact. This makes the process a strong candidate for AI assistance, the first step being segmentation and enhancement of the features. However, the bullet’s metal surface makes image capture and analysis more complicated than in other common domains. In the present study, an attempt is made to extract one of the defining features of fired bullets, namely striations, using deep learning techniques, which will assist in automated firearm identification. The objective is achieved with U-net, a CNN-based semantic segmentation architecture, and two of its variants, the Inception U-net and the Residual U-net. The U-net architecture achieved 88% accuracy with a training loss as low as 0.0231 after 700 epochs of training. The Inception U-net and Residual U-net architectures achieved training accuracies of 88.30% and 88.79%, respectively, while their training losses reduced to as low as 0.0194 and 0.0151, respectively, with the same number of epochs. With 10-fold cross-validation, the accuracy of the Residual U-net further improved to 89.70%. One key observation from the three models’ training curves is that convergence is significantly faster in the Residual U-net than in the Inception U-net, which, in turn, is much faster than the plain U-net. Supported by statistical analysis, the study establishes that deep learning techniques are valuable for segmenting striation marks from bullet images and help ballistics experts identify firearms.
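The faster convergence of the Residual U-net comes from its identity skip connections: even when the convolutional weights contribute little early in training, the input signal still flows through the block. The minimal NumPy sketch below contrasts a plain two-convolution block with a residual one; the single-channel `conv2d`, kernel shapes, and block structure here are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def conv2d(x, k):
    """Naive 'same'-padded single-channel 2-D convolution (illustrative only)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def plain_block(x, k1, k2):
    """Two stacked conv+ReLU stages, as in a plain U-net encoder block."""
    return relu(conv2d(relu(conv2d(x, k1)), k2))

def residual_block(x, k1, k2):
    """Same two convolutions, plus an identity skip connection before the
    final ReLU, as in a Residual U-net block."""
    return relu(conv2d(relu(conv2d(x, k1)), k2) + x)
```

With near-zero kernels, `plain_block` outputs almost nothing, whereas `residual_block` still passes the input through the skip path, which is the intuition behind its faster training convergence.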