Abstract

Deep learning models can produce unstable results when imperceptible perturbations, which are difficult for humans to recognize, are introduced into their inputs. Because the interpretability of these models is poorly understood, such instability can significantly affect the accuracy and security of deep learning applications. This problem clearly arises in underwater acoustic target recognition for ocean sensing, a field critical to security research. To address this issue, this article investigates the reliability of state-of-the-art deep learning models by exploring adversarial attack methods that add small, carefully crafted perturbations to acoustic Mel-spectrograms to generate adversarial spectrograms. Experimental results on real-world datasets reveal that these models can be forced to learn unexpected features when presented with adversarial spectrograms, resulting in significant drops in accuracy. Specifically, under the iterative attack method with stronger perturbations, the overall accuracy of all models decreases by approximately 70% on both datasets.
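The abstract does not give the exact attack configuration, but a minimal PyTorch sketch of an iterative, signed-gradient (BIM/PGD-style) attack on a batch of Mel-spectrograms might look as follows; the function name iterative_attack, the perturbation budget eps, the step size alpha, and the iteration count are illustrative assumptions, not the paper's reported settings.

# Minimal sketch of an iterative signed-gradient (BIM/PGD-style) attack on a
# Mel-spectrogram classifier. Hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def iterative_attack(model, mel_spec, label, eps=0.05, alpha=0.01, steps=10):
    """Perturb a Mel-spectrogram batch within an L-infinity ball of radius eps."""
    adv = mel_spec.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Gradient of the classification loss with respect to the spectrogram.
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        # Signed-gradient step, then project back into the eps-ball around the input.
        adv = (adv + alpha * grad.sign()).detach()
        adv = (mel_spec + (adv - mel_spec).clamp(-eps, eps)).detach()
    return adv

# Example usage (assuming a trained classifier and a batch of log-Mel spectrograms):
# adv_spec = iterative_attack(classifier, mel_batch, labels, eps=0.05, alpha=0.01, steps=10)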
