In recent years, deep learning has shown remarkable potential in many fields, but it also exhibits serious vulnerabilities. In network traffic classification, attackers exploit these vulnerabilities by adding carefully designed perturbations to normal traffic, causing the classifier to produce incorrect results and thereby mounting adversarial attacks. Existing adversarial attack methods for network traffic mainly target specific models or specific sample scenarios and therefore suffer from poor transferability, high time cost, and limited practicality. This article proposes a universal and transferable adversarial attack method against network traffic classification, which can not only launch a universal adversarial attack on all samples in a network traffic dataset but also transfers across both datasets and classification models, that is, its attack effect carries over at both the data level and the model level. The method exploits the geometric characteristics of the network model to design the target loss function and to optimize the generation of the universal perturbation, biasing the features learned at each layer of the model and thus producing incorrect classification results. Universality and transferability experiments were conducted on three standard network traffic datasets covering different classification applications, USTC-TFC2016, ISCX2016, and CICIoT2023, and on five common network models, including LeNet5. The results show that the universal adversarial attack against the five models achieves average attack success rates above 80%, 85%, and 88% on USTC-TFC2016, ISCX2016, and CICIoT2023, respectively, with an average time cost of about 0–0.3 ms. The method also shows strong transferable attack performance among the five network models and across the three datasets, with transfer attack rates approaching 100%, making it better suited to practical applications.
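The abstract does not give the exact loss function, so the following is only a minimal sketch of the general idea it describes: optimizing a single perturbation over a whole dataset so that intermediate-layer features drift away from their clean values, which tends to flip the classifier's output. The function name, the choice of hooked layers, the budget `eps`, and the assumption that inputs lie in [0, 1] are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of universal-perturbation optimization via layer-wise feature drift
# (an assumption based on the abstract, not the paper's exact algorithm).
import torch


def craft_universal_perturbation(model, loader, layers, eps=0.03,
                                 steps=10, lr=1e-2, device="cpu"):
    """model: trained traffic classifier; loader: DataLoader of clean flows;
    layers: modules whose features the loss targets (hypothetical choice)."""
    model.eval().to(device)
    feats = {}

    def hook(name):
        def _h(_module, _inp, out):
            feats[name] = out
        return _h

    handles = [m.register_forward_hook(hook(str(i))) for i, m in enumerate(layers)]

    # One shared perturbation applied to every sample ("universal").
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        for x, _ in loader:
            x = x.to(device)
            model(x)                                   # clean forward pass
            clean = {k: v.detach() for k, v in feats.items()}
            model(torch.clamp(x + delta, 0.0, 1.0))    # perturbed forward pass
            # Maximize feature drift at each hooked layer (minimize its negative).
            loss = -sum(torch.norm(feats[k] - clean[k]) for k in clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)                # keep perturbation small

    for h in handles:
        h.remove()
    return delta.detach()
```

Because the perturbation is optimized against internal feature representations rather than a single model's output labels, a sketch of this kind is one plausible way to obtain the cross-model and cross-dataset transferability the abstract reports.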