Abstract

With the rapid development of information technology, the number of devices in the Internet of Things (IoT) is increasing explosively, which makes device identification a great challenge. Deep neural networks (DNNs) have been used for device identification in IoT due to their superior learning ability. However, DNNs are susceptible to adversarial attacks, which can greatly degrade the accuracy of deep learning (DL) models for device identification. The adversarial attack is one of the fundamental security concerns for DNNs, and it is of great importance to study the generation of adversarial examples and to examine their attack effects for the design of robust DNN-based device identification schemes. In this article, we examine the effects of nontargeted and targeted adversarial attacks on convolutional neural network (CNN)-based device identification and propose combined evaluation indicators of logits to enrich the evaluation criteria. Our experimental results demonstrate that the identification accuracy degrades as the perturbation level and iteration step size increase, and that the proposed combined evaluation indicators are effective in revealing the signal differences of individual devices. The insights from this study will be useful for the design of robust DL-based IoT systems.
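The abstract refers to iterative adversarial perturbations controlled by a perturbation level and an iteration step size. The paper's exact attack and CNN are not specified here, but the general mechanism can be illustrated with a minimal sketch: an iterative sign-gradient (FGSM-style) nontargeted attack on a toy logistic "device classifier", where `eps` plays the role of the perturbation level and `alpha` the iteration step size. The model, parameters, and signal values below are all illustrative assumptions, not the paper's setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(x, w, b):
    # Toy linear model standing in for a CNN's logit output.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def grad_loss_wrt_input(x, y, w, b):
    # Gradient of the binary cross-entropy loss w.r.t. the input signal x.
    p = sigmoid(logit(x, w, b))
    return [(p - y) * wi for wi in w]

def iterative_fgsm(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Nontargeted attack: ascend the loss in sign-gradient steps of size
    alpha, projecting back into an eps-ball around the original signal."""
    x_adv = list(x)
    for _ in range(steps):
        g = grad_loss_wrt_input(x_adv, y, w, b)
        x_adv = [xi + alpha * (1 if gi > 0 else -1 if gi < 0 else 0)
                 for xi, gi in zip(x_adv, g)]
        # Keep the perturbation within the allowed level eps.
        x_adv = [min(max(xa, xo - eps), xo + eps)
                 for xa, xo in zip(x_adv, x)]
    return x_adv

# Illustrative weights and input signal (hypothetical values).
w, b = [1.5, -2.0, 0.5], 0.1
x, y = [0.2, -0.4, 0.6], 1  # true label: 1

x_adv = iterative_fgsm(x, y, w, b)
p_clean = sigmoid(logit(x, w, b))
p_adv = sigmoid(logit(x_adv, w, b))
```

Under these assumptions, the adversarial signal lowers the model's confidence in the true label while staying within the eps-ball, mirroring the abstract's observation that accuracy degrades as the perturbation level and step size grow. A targeted variant would instead descend the loss toward a chosen target label.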
