Abstract
Synthetic aperture radar (SAR) image classification is a challenging problem because of the complex imaging mechanism and the random speckle noise that hampers radar image interpretation. Recently, deep neural networks (DNNs) have been shown to outperform previous state-of-the-art techniques on computer vision tasks owing to their ability to learn relevant features from data. However, the fragility of these models has received far less academic attention in the remote sensing community, which limits our understanding of the security of remote sensing image classification models. To explore the basic characteristics of adversarial examples for SAR images, we compare several mainstream adversarial attack methods and evaluate the security of the DNNs used from the perspective of attention. We then conduct additional experiments. The experimental results provide data support and an effective reference for the defense capabilities of various DNNs against attacks on SAR image classification models.
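One of the mainstream attacks typically included in such comparisons is the fast gradient sign method (FGSM), which perturbs an input in the direction of the sign of the loss gradient. The sketch below illustrates the idea on a toy logistic-regression "classifier" with made-up weights and data; it is an illustrative assumption, not the models or SAR data used in the paper.

```python
import numpy as np

# Minimal FGSM sketch on a toy logistic model p = sigmoid(w.x + b).
# Weights and inputs are invented for illustration only.

def fgsm_perturb(x, w, b, y, eps):
    """Return an FGSM adversarial example x + eps * sign(dL/dx).

    For cross-entropy loss on a logistic model, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad_x = (p - y) * w                           # dL/dx for cross-entropy
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # toy classifier weights (assumed)
b = 0.0
x = np.array([0.2, -0.1, 0.4])   # clean input features
y = 1.0                          # true label
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# The perturbation is bounded by eps in each component, yet it
# pushes the model's confidence in the true class downward.
```

The per-pixel bound eps is what makes such perturbations hard to notice, which is precisely why attention maps are a useful lens for diagnosing where a DNN becomes vulnerable.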