Abstract

In synthetic aperture radar (SAR) automatic target recognition (ATR), there are mainly two types of methods: physics-driven models and data-driven networks. Physics-driven models exploit electromagnetic theory to obtain physical properties of targets, while data-driven networks extract deep discriminative features. These two types of features represent target characteristics in the scattering domain and the image domain, respectively. However, the representation discrepancy caused by the modality gap between them hinders their comprehensive utilization and fusion. To take full advantage of both physical knowledge and deep discriminative features for SAR ATR, we propose a new feature fusion learning framework, SDF-Net, which combines scattering features with deep image features. In this work, we treat the attributed scattering centers (ASCs) as set data rather than as multiple individual points, which better mines the topological interactions among scatterers. Multi-region, multi-scale sub-sets are then constructed at both the component and target levels. Specifically, the most significant scattering intensity and the overall representation of these sub-sets are exploited in turn to learn permutation-invariant scattering features with a set-oriented deep network. The scattering representations provide mid-level semantic and structural features that are subsequently fused with the complementary deep image features to yield an end-to-end high-level feature learning framework, which enhances the generalization ability of the network, especially under complex observation conditions. Extensive experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database verify the effectiveness and robustness of SDF-Net against both typical SAR ATR networks and ASC-based models.
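The abstract does not specify the architecture of the set-oriented network, but the combination of "most significant scattering intensity" and "overall representation" pooled over ASC sub-sets suggests a DeepSets-style design: embed each scatterer independently, then apply symmetric pooling (max and mean) so the result is invariant to the ordering of the scatterers. The sketch below is a minimal NumPy illustration of that idea, with late fusion by concatenation; the weight shapes, attribute count per ASC, and fusion choice are all hypothetical, not the paper's actual configuration.

```python
import numpy as np

def phi(points, W, b):
    # Per-scatterer embedding: each ASC row (e.g. position, amplitude,
    # orientation attributes) is mapped independently through a ReLU layer.
    return np.maximum(points @ W + b, 0.0)

def set_encoder(points, W, b):
    """Permutation-invariant encoding of an ASC set.

    Max pooling keeps the most significant scattering response in each
    feature channel; mean pooling captures the overall representation.
    Both are symmetric in the scatterers, so reordering the set
    leaves the output unchanged.
    """
    h = phi(points, W, b)                          # (n_scatterers, d)
    return np.concatenate([h.max(axis=0), h.mean(axis=0)])

def fuse(scatter_feat, image_feat):
    # Simple late fusion by concatenation (a hypothetical choice; the
    # paper's actual fusion mechanism is not detailed in the abstract).
    return np.concatenate([scatter_feat, image_feat])

rng = np.random.default_rng(0)
W, b = rng.standard_normal((4, 8)), rng.standard_normal(8)
asc = rng.standard_normal((5, 4))       # 5 scatterers, 4 attributes each
f1 = set_encoder(asc, W, b)
f2 = set_encoder(asc[::-1], W, b)       # same set, scatterers reordered
assert np.allclose(f1, f2)              # permutation invariance holds
```

In a trained model the per-scatterer embedding would be a learned MLP and the fused vector would feed a classifier head, but the invariance property demonstrated here is the structural point the abstract emphasizes.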
