Abstract

The automatic recognition of targets of interest is a vital task for synthetic aperture radar (SAR) systems. Existing SAR recognition methods fall mainly into two groups: those that extract image features from the target amplitude image, and those that match testing samples against template ones using scattering centers extracted from the target complex data. Among amplitude image-based methods, convolutional neural networks (CNNs) achieve nearly the highest accuracy on images acquired under standard operating conditions (SOCs), whereas scattering center feature-based methods deliver steady performance on images acquired under extended operating conditions (EOCs). To achieve good recognition performance under both SOCs and EOCs, a feature fusion framework (FEC) based on scattering center features and deep CNN features is proposed for the first time. For the scattering center features, we first extract the attributed scattering centers (ASCs) from the input SAR complex data, then construct a bag of visual words from these scattering centers, and finally transform the extracted parameter sets into feature vectors via k-means clustering. For the CNN features, we propose a modified VGGNet that not only extracts powerful features from amplitude images but also achieves state-of-the-art recognition accuracy. For the feature fusion, discriminant correlation analysis (DCA) is introduced into the FEC framework; it maximizes the correlation between the CNN and ASC features while decorrelating features belonging to different categories within each feature set. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database demonstrate that the proposed FEC achieves superior effectiveness and robustness under both SOCs and EOCs.
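The ASC-to-feature step the abstract describes (variable-length scattering center parameter sets turned into fixed-length vectors via a k-means codebook) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the ASC parameter dimensionality, codebook size, and input data below are all hypothetical stand-ins.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns a (k, d) codebook of centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(
            ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):  # keep old centroid if the cluster emptied
                centroids[j] = members.mean(axis=0)
    return centroids

def bovw_encode(asc_params, codebook):
    """Map one target's variable-length ASC parameter set to a fixed-length
    normalized histogram over the visual words (codebook entries)."""
    labels = np.argmin(
        ((asc_params[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Hypothetical data: each target yields a variable number of ASCs, each
# described by a small parameter vector (e.g., position, amplitude, ...).
rng = np.random.default_rng(1)
targets = [rng.normal(size=(rng.integers(5, 12), 4)) for _ in range(20)]
codebook = kmeans(np.vstack(targets), k=8)
features = np.stack([bovw_encode(t, codebook) for t in targets])
print(features.shape)  # every target is now one 8-dim feature vector
```

The point of the encoding is that targets with different numbers of scattering centers all end up as vectors of the same length, which is what a later fusion stage such as DCA requires.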
