Abstract

K-Nearest Neighbor (kNN)-based deep learning methods have been applied to many applications due to their simplicity and geometric interpretability. However, the robustness of kNN-based deep classification models has not been thoroughly explored, and kNN attack strategies remain underdeveloped. In this paper, we first propose an Adversarial Soft kNN (ASK) loss for developing more effective kNN-based deep neural network attack strategies and designing better defense methods against them. Our ASK loss provides a differentiable surrogate of the expected kNN classification error. It is also interpretable, as it preserves the mutual information between the perturbed input and the in-class reference data. We use the ASK loss to design a novel attack method called the ASK-Attack (ASK-Atk), which shows superior attack efficiency and accuracy degradation relative to previous kNN attacks on hidden layers. We then derive an ASK-Defense (ASK-Def) method that optimizes the worst-case ASK training loss. Experiments on CIFAR-10 (ImageNet) show that (i) ASK-Atk achieves ≥ 13% (≥ 13%) improvement in attack success rate over previous kNN attacks, and (ii) ASK-Def outperforms the conventional adversarial training method by ≥ 6.9% (≥ 3.5%) in terms of robustness improvement. Relevant code is available at https://github.com/wangren09/ASK.
