Abstract

Due to the security concerns arising from the adversarial vulnerability of deep metric learning models, it is essential to enhance their adversarial robustness for secure neural network software development. Existing defense strategies utilize adversarial triplets to enhance adversarial robustness but sacrifice benign performance. This paper proposes a novel framework that enhances adversarial robustness while maintaining benign performance by introducing Neural Discrete Adversarial Training (NDAT) for deep metric learning. NDAT employs a VQGAN to transform adversarial triplets into discrete inputs and then minimizes the metric loss on these discrete adversarial triplets. NDAT aligns discrete adversarial examples more closely with their clean counterparts, significantly reducing the distribution deviation between them. Moreover, visual explanations reveal that NDAT maintains consistent attention maps between benign and adversarial triplets, concentrating on structural details and object locations under perturbation. To demonstrate the effectiveness of our approach, we combine NDAT with popular adversarial methods under various perturbation iterations and intensities. Experimental evaluations on three benchmark databases illustrate that our proposed framework for deep metric learning significantly outperforms state-of-the-art defense approaches in terms of both adversarial robustness and benign performance.
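To make the training procedure described above concrete, the following is a minimal illustrative sketch of one NDAT-style update step, written under assumptions rather than from the authors' released code: the `ToyVQGAN` stand-in, the `fgsm_triplet` attack, the embedding network, and all hyperparameters (margin, step size, codebook size) are hypothetical placeholders, whereas the actual method uses a pretrained VQGAN and the paper's attacks and metric loss. The sketch only shows the core idea: craft an adversarial triplet, discretize it through encode/quantize/decode, and minimize a triplet loss on the discretized inputs.

```python
import torch
import torch.nn as nn

# --- Placeholder components (assumptions for illustration; not the authors' code) ---

class ToyVQGAN(nn.Module):
    """Stand-in for a pretrained VQGAN: encode -> nearest-codebook quantize -> decode."""
    def __init__(self, dim=8, num_codes=32):
        super().__init__()
        self.enc = nn.Conv2d(3, dim, 4, stride=4)           # continuous latents
        self.codebook = nn.Embedding(num_codes, dim)        # discrete code vectors
        self.dec = nn.ConvTranspose2d(dim, 3, 4, stride=4)  # reconstruction

    def forward(self, x):
        z = self.enc(x)                                     # B x D x h x w
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)  # nearest code
        q = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        q = z + (q - z).detach()                            # straight-through estimator
        return self.dec(q)                                  # discrete-input reconstruction


def fgsm_triplet(embed, a, p, n, loss_fn, eps=8 / 255):
    """Hypothetical one-step attack that perturbs a triplet to increase the metric loss."""
    a, p, n = (t.clone().requires_grad_(True) for t in (a, p, n))
    loss_fn(embed(a), embed(p), embed(n)).backward()
    return tuple((t + eps * t.grad.sign()).clamp(0, 1).detach() for t in (a, p, n))


def ndat_step(embed, vqgan, optimizer, anchor, positive, negative):
    """One NDAT-style update: craft an adversarial triplet, discretize it with the
    VQGAN, then minimize the metric (triplet) loss on the discretized inputs."""
    loss_fn = nn.TripletMarginLoss(margin=0.2)
    adv = fgsm_triplet(embed, anchor, positive, negative, loss_fn)
    a_q, p_q, n_q = (vqgan(x) for x in adv)                 # discrete adversarial triplet
    loss = loss_fn(embed(a_q), embed(p_q), embed(n_q))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    embed = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
    vqgan = ToyVQGAN().requires_grad_(False)                # VQGAN stays frozen
    opt = torch.optim.SGD(embed.parameters(), lr=1e-2)
    a, p, n = (torch.rand(4, 3, 32, 32) for _ in range(3))
    print("triplet loss:", ndat_step(embed, vqgan, opt, a, p, n))
```

Keeping the quantizer frozen and passing only the discretized adversarial triplet to the metric loss is what, per the abstract, pulls adversarial inputs back toward the clean data distribution while the embedding network is being trained.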
