Deep learning algorithms are widely used for Automatic Target Recognition (ATR) with Synthetic Aperture Radar (SAR) imagery. However, conventional deep learning-based ATR models often degrade under Extended Operating Conditions (EOCs), whose primary factors include large depression-angle variations, noise interference, limited training data, and partial occlusion. In this paper, we propose a deep neural network-based technique to recognize SAR targets under both Standard Operating Conditions (SOC) and EOCs. The proposed approach comprises an encoder network and a decoder network. The encoder integrates multiple dilated convolutions to extract multi-scale features and mitigate noise-induced recognition challenges. Furthermore, a feature refinement module is embedded within the multi-scale channels to strengthen the extraction of discriminative and resilient features: it selectively emphasizes informative features while suppressing irrelevant ones. Additionally, a salient attention block and a spatial feature learning module are introduced for feature selection. The selected features are then utilized in the top-level layer of the encoder to preserve spatial relationships among diverse features; this spatial feature learning module improves recognition performance, particularly when training data are scarce. The decoder consists of stacked transposed convolution layers, which help the encoder learn discriminative image features. Experimental evaluations on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset validate the efficacy of the proposed methodology.
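The multi-scale dilated convolution and feature refinement steps described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact design: the kernels, dilation rates (1, 2, 3), and the sigmoid channel gate (a squeeze-and-excitation-style reweighting) are assumptions introduced for demonstration.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """'Same'-padded 2-D convolution of one channel at a given dilation rate."""
    kh, kw = kernel.shape
    # The effective receptive field grows with dilation while the kernel size
    # (and hence the parameter count) stays fixed.
    ph = ((kh - 1) * dilation) // 2
    pw = ((kw - 1) * dilation) // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * dilation:i * dilation + x.shape[0],
                                     j * dilation:j * dilation + x.shape[1]]
    return out

def multi_scale_features(x, kernels, dilations=(1, 2, 3)):
    """Stack responses from several dilation rates into a (C, H, W) block."""
    return np.stack([dilated_conv2d(x, k, d) for k, d in zip(kernels, dilations)])

def refine(features):
    """Channel refinement sketch: gate each channel by a squashed global statistic,
    emphasizing informative channels and suppressing irrelevant ones."""
    gap = features.mean(axis=(1, 2))          # global average pooling per channel
    weights = 1.0 / (1.0 + np.exp(-gap))      # sigmoid gate in (0, 1)
    return features * weights[:, None, None], weights

# Example: a toy 8x8 "SAR chip" run through three dilation branches, then refined.
x = np.random.default_rng(0).standard_normal((8, 8))
kernels = [np.ones((3, 3)) / 9.0 for _ in range(3)]
feats = multi_scale_features(x, kernels)      # shape (3, 8, 8)
refined, w = refine(feats)
```

In a real network the per-channel gate would be produced by small learned layers rather than a fixed sigmoid of the pooled mean, but the data flow, pool per channel, squash, rescale, is the same.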