Abstract

Synthetic aperture radar (SAR) images for automatic target recognition (ATR) have attracted significant interest as they can be acquired day and night under a wide range of weather conditions. However, SAR images can be time-consuming to analyse, even for experts. ATR can alleviate this burden, and deep learning is an attractive solution. A new Pose-informed deep learning architecture is proposed that accounts for the impact of target orientation on the SAR image as the scatterer configuration changes. Classification is achieved in two stages. First, the orientation of the target is estimated using a Hough transform and a convolutional neural network (CNN). Then, classification is performed by a CNN trained specifically on targets with orientations similar to that of the target under test. The networks are trained with translation and SAR-specific data augmentation. The proposed Pose-informed deep network architecture was successfully tested on the Military Ground Target Dataset (MGTD) and the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset. Results show the proposed solution outperformed standard AlexNets on the MGTD, MSTAR extended operating condition (EOC)1, EOC2 and standard operating condition (SOC)10 datasets, achieving a score of 99.13% on MSTAR SOC10.
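The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pose estimator here is a crude principal-axis placeholder standing in for the Hough-transform-plus-CNN stage, and the per-bin classifiers are arbitrary callables standing in for CNNs trained on similar orientations. The eight-bin pose quantisation is an assumption for illustration only.

```python
import numpy as np

N_POSE_BINS = 8  # assumed coarse orientation binning; not from the paper


def estimate_pose(image: np.ndarray) -> float:
    """Stage 1 stand-in: return an orientation estimate in degrees [0, 360).

    The paper uses a Hough transform and a CNN; here the dominant axis of
    the bright scatterers (principal eigenvector of their covariance)
    serves as a simple placeholder.
    """
    ys, xs = np.nonzero(image > image.mean())
    if len(xs) < 2:
        return 0.0
    coords = np.stack([xs - xs.mean(), ys - ys.mean()])
    cov = coords @ coords.T
    _, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    vx, vy = evecs[:, -1]                   # eigenvector of largest eigenvalue
    return float(np.degrees(np.arctan2(vy, vx)) % 360.0)


def pose_bin(angle_deg: float) -> int:
    """Quantise an orientation into one of N_POSE_BINS coarse bins."""
    return int(angle_deg // (360.0 / N_POSE_BINS)) % N_POSE_BINS


def classify(image: np.ndarray, classifiers: list) -> str:
    """Stage 2: route the chip to the classifier trained on similar poses."""
    return classifiers[pose_bin(estimate_pose(image))](image)
```

For example, a chip whose bright scatterers lie along a horizontal line yields an orientation near 0° or 180°, so it is routed to the corresponding pose bin's classifier.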

