Abstract
Deep convolutional neural networks (CNNs) are widely applied to classification tasks on side-scan sonar (SSS) images. However, state-of-the-art neural networks are easily fooled by adversarial attacks that introduce tiny modifications to the images, threatening the security of SSS classification. The robustness of CNNs to adversarial attacks can be improved by introducing adversarial examples through adversarial training, and effective adversarial examples are typically generated by carefully designed attackers. For the underwater sonar scenario, an adversarial attack method specially designed to weaken SSS image classification can help the research community better understand the weaknesses of CNNs in this setting and improve security measures in a well-directed way. Exploring adversarial attack methods for SSS image classification is therefore essential. Nevertheless, existing adversarial attack methods are designed for optical images and reflect none of the physical characteristics of sonar images. To fill this gap and investigate adversarial attacks under real-world conditions, in this paper we propose an adversarial attack method named Lambertian Adversarial Sonar Attack (LASA). LASA first leverages the Lambertian model to simulate the formation of the SSS image, factoring the image into three parameters; it then updates these parameters along the gradient direction computed via the chain rule, and finally regenerates the adversarial example from the updated parameters to fool the classifier. To validate the performance of LASA, we constructed a diversified SSS image dataset containing three categories. On this dataset, LASA reduces the Top-1 accuracy of a well-trained ResNet-101 to 7.31% ± 0.21% (one-shot version) and 0.00% (iterative version), and the success rate of the targeted attack reaches 97.03% ± 2.24%, far exceeding the performance of existing state-of-the-art adversarial attack methods. Meanwhile, we show that adversarial training with examples generated by LASA makes the classifier more robust. We expect our method to serve as a benchmark for adversarial attacks on SSS images, motivating future research on novel neural networks or defensive methods that resist real-world adversarial attacks on SSS images.
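To make the attack pipeline described above concrete, the following is a minimal, hypothetical sketch (in PyTorch) of a Lambertian-parameterised gradient attack. The factorisation into a gain term k, a reflectivity map refl, and an incidence-angle map theta, the signed-gradient update rule, and the names lasa_attack, steps, and lr are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lasa_attack(model, image, label, steps=10, lr=0.01):
    """Hypothetical sketch of a Lambertian-parameterised gradient attack.

    Assumes the SSS image is approximated by a Lambertian model
    I = k * R * cos(theta), where k (gain), R (reflectivity) and theta
    (incidence angle) are the three free parameters; this factorisation
    and the update rule are assumptions, not the paper's exact method.
    Expects a batched image tensor and an integer class-label tensor.
    """
    # Initialise the three physical parameters so they reproduce the image.
    k = torch.ones_like(image, requires_grad=True)        # gain / source term
    refl = image.clone().detach().requires_grad_(True)    # reflectivity map
    theta = torch.zeros_like(image, requires_grad=True)   # incidence angle

    for _ in range(steps):
        # Re-render the image from the physical parameters (Lambertian model).
        adv = k * refl * torch.cos(theta)
        loss = F.cross_entropy(model(adv), label)
        # Back-propagate through the rendering step (chain rule) to the parameters.
        g_k, g_refl, g_theta = torch.autograd.grad(loss, (k, refl, theta))
        with torch.no_grad():
            # Ascend the loss to push the classifier away from the true label.
            k += lr * g_k.sign()
            refl += lr * g_refl.sign()
            theta += lr * g_theta.sign()

    # Regenerate the adversarial example from the updated parameters.
    return (k * refl * torch.cos(theta)).detach()
```

A targeted variant would instead descend the loss toward a chosen target label; the one-shot version corresponds to running a single update step.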