Abstract

Deep learning has achieved outstanding success in edge scenarios due to the emergence of lightweight neural networks. However, a number of works show that these networks are vulnerable to adversarial examples, which introduces security risks. Classical adversarial detection methods are designed for the white-box setting and perform poorly in black-box settings such as the edge scenario. Inspired by the experimental observation that different models are highly likely to give different predictions for the same adversarial example, we propose a novel adversarial detection method called the Ensemble-model Adversarial Detection Method (EADM). EADM defends against prospective adversarial attacks on edge devices through cloud monitoring: an ensemble of models is deployed in the cloud, and the most probable label is assigned to each input copy received from the edge. A comparison experiment against baseline methods in an assumed edge scenario demonstrates the effectiveness of EADM, achieving a higher defense success rate and a lower false positive rate with an ensemble of five pretrained models. An additional ablation experiment explores the influence of different model combinations and adversarially trained models. In addition, the possibility of transferring our method to other fields is discussed, demonstrating its transferability across domains.
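
The following is a minimal sketch of the ensemble-voting idea described above: collect one label per cloud-side model and flag an input as suspicious when the ensemble disagrees too much. The function name, the agreement threshold, and the toy stand-in models are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
from typing import Callable, List, Sequence

# Hypothetical interface: each ensemble member maps an input to a class label.
Model = Callable[[object], int]

def eadm_predict(models: Sequence[Model], x: object, agreement_threshold: float = 0.6):
    """Return the majority-vote label and a flag marking x as a suspected
    adversarial example when ensemble agreement falls below the threshold.
    (Threshold value is an assumption for illustration.)"""
    votes = [m(x) for m in models]                 # one predicted label per model
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)                 # fraction of models agreeing
    is_adversarial = agreement < agreement_threshold
    return label, is_adversarial

# Toy usage: stand-ins for the five pretrained cloud-side models.
toy_models: List[Model] = [lambda x: 3, lambda x: 3, lambda x: 3, lambda x: 7, lambda x: 3]
print(eadm_predict(toy_models, x="input copy from the edge"))  # -> (3, False)
```

In this sketch, strong agreement returns the majority label as the trusted prediction, while low agreement signals a likely adversarial input for the monitoring side to handle.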
