Abstract

Adversarial training is an effective method to enhance the adversarial robustness of deep neural networks. However, it requires large amounts of labeled data, which are often difficult to acquire. Recent research has shown that self-supervised learning can improve model performance and model uncertainty using unlabeled data. In this paper, we introduce a new adversarial self-supervised learning framework to learn a robust pretrained model for remote sensing scene classification. The proposed method exploits the advantages of a dual-network structure, and it requires neither labeled data for adversarial example generation nor negative samples for contrastive learning. Specifically, it consists of three major steps. First, we train the online model and the target model to extract deep image features. Second, we generate two kinds of instance-wise adversarial examples. Finally, we iteratively learn a robust model by implicitly comparing the difference between clean data and their perturbed counterparts. Preliminary experimental results on a remote sensing scene classification dataset show that our method can obtain higher robust accuracy. Our method can also be combined with other adversarial defense techniques to further promote model robustness.
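The three steps above can be sketched as a toy NumPy loop. This is an illustrative assumption, not the paper's implementation: the linear-plus-tanh "encoder", the negative-cosine similarity loss, the finite-difference sign-gradient perturbation, and the EMA target update are all minimal stand-ins for the dual-network (online/target) structure and the instance-wise adversarial example generation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_in, dim_out, batch = 8, 4, 16

# Online and target networks; the target is an EMA copy of the online weights
# (a stand-in for the dual-network structure; no negative samples are used).
W_online = rng.normal(scale=0.5, size=(dim_in, dim_out))
W_target = W_online.copy()

def encode(x, W):
    """Toy encoder: linear map + tanh, standing in for a deep feature extractor."""
    return np.tanh(x @ W)

def l2_normalize(z, eps=1e-8):
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)

def similarity_loss(z_online, z_target):
    """Negative cosine similarity between online and target embeddings:
    the implicit clean-vs-perturbed comparison, with no contrastive negatives."""
    p, q = l2_normalize(z_online), l2_normalize(z_target)
    return -np.mean(np.sum(p * q, axis=-1))

def instancewise_adv(x, W_online, z_target, eps=0.03, delta=1e-4):
    """Instance-wise adversarial view: nudge x in the sign of a crude
    finite-difference gradient of the similarity loss (no labels needed).
    The gradient here is batch-averaged per input coordinate - a toy scale."""
    grad = np.zeros_like(x)
    base = similarity_loss(encode(x, W_online), z_target)
    for i in range(x.shape[1]):
        xp = x.copy()
        xp[:, i] += delta
        grad[:, i] = (similarity_loss(encode(xp, W_online), z_target) - base) / delta
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def ema_update(W_target, W_online, tau=0.99):
    """Target network slowly follows the online network (momentum update)."""
    return tau * W_target + (1.0 - tau) * W_online

# One illustrative training step on an unlabeled batch of "images" in [0, 1]:
x = rng.uniform(size=(batch, dim_in))
z_t = encode(x, W_target)                      # target view of the clean data
x_adv = instancewise_adv(x, W_online, z_t)     # adversarial counterpart
loss = similarity_loss(encode(x_adv, W_online), z_t)
W_target = ema_update(W_target, W_online)      # step 1: keep target tracking online
```

In a real implementation the gradient would come from backpropagation through the online network, and the loss would be symmetrized over both adversarial views; this sketch only shows how the pieces fit together.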
