Abstract
In modern surveillance, automatic target recognition (ATR) is a critical challenge, demanding rapid and precise object identification, especially in military and disaster-response scenarios. This research presents a comprehensive framework for ground target classification, focusing on Synthetic Aperture Radar (SAR) imagery. Harnessing the Moving and Stationary Target Acquisition and Recognition (MSTAR) Mixed Targets dataset, this study integrates Convolutional Neural Networks (CNNs) into SAR image analysis. Transfer learning emerges as a key strategy, adapting representations from pre-trained CNNs to the intricacies of target recognition in SAR imagery. This approach outperforms traditional methods and improves efficiency in the face of dataset annotation challenges. Several noteworthy CNN architectures, including DarkNet-19, DenseNet-121, InceptionV4, ResNet-152, and VGG19, are explored. The ResNet-152 model demonstrates exceptional performance, emerging as the leading contender with a testing accuracy of 98.56% when trained from scratch. Furthermore, with transfer learning, the model's accuracy is further improved, reaching 98.81% on the testing dataset. This research charts a path for SAR image analysis guided by pre-trained CNNs and transfer learning, improving the accuracy and efficiency of ATR in settings previously considered intractable.