Abstract

Edge intelligence plays an important role in building smart cities, but the vulnerability of edge nodes to adversarial attacks has become an urgent problem. A so-called adversarial example can fool a deep learning model on an edge node into misclassification. Due to the transferability property of adversarial examples, an adversary can easily fool a black-box model using a local substitute model. Edge nodes generally have limited resources and cannot afford the complicated defense mechanisms deployed on cloud data centers. To address this challenge, we propose a dynamic defense mechanism, namely EI-MTD. EI-MTD first obtains small, robust member models through differential knowledge distillation from a large teacher model on a cloud data center. Then, a dynamic scheduling policy based on a Bayesian Stackelberg game selects the target model that serves each request. This dynamic defense prevents the adversary from selecting an optimal substitute model for black-box attacks. We also conduct extensive experiments to evaluate the proposed mechanism, and the results show that EI-MTD effectively protects edge intelligence against adversarial attacks in black-box settings.
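
As a rough illustration of the dynamic scheduling idea only (not the authors' implementation), the sketch below randomizes which distilled member model serves each inference request according to a mixed strategy; the model names and probabilities are hypothetical, and in EI-MTD the strategy would come from solving the Bayesian Stackelberg game.

```python
import random

# Hypothetical member models distilled to edge-friendly sizes (illustrative names).
member_models = ["resnet20_distilled", "mobilenetv2_distilled", "shufflenet_distilled"]

# Assumed defender mixed strategy over member models, e.g. computed on the cloud
# by solving the Bayesian Stackelberg game (the values here are made up).
equilibrium_strategy = [0.5, 0.3, 0.2]

def schedule_model():
    """Draw the model that serves the next request from the mixed strategy,
    so the adversary cannot fix a single target model for transfer attacks."""
    return random.choices(member_models, weights=equilibrium_strategy, k=1)[0]

if __name__ == "__main__":
    # Simulate scheduling for ten incoming inference requests.
    for i in range(10):
        print(f"request {i} -> served by {schedule_model()}")
```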
