For web security, it is essential to accurately classify traffic across various web applications in order to detect malicious activity lurking within network traffic. However, encryption protocols designed for privacy protection, such as TLS 1.3 and IPsec, make it difficult to apply traditional traffic classification methods such as deep packet inspection (DPI). Recently, the advent of deep learning has significantly advanced the field of encrypted traffic analysis (ETA), outperforming traditional traffic analysis approaches. Notably, pre-trained deep learning-based ETA models have demonstrated superior analytical capabilities. However, the security of these deep learning models is often overlooked during their design and development. In this paper, we conducted adversarial attacks to evaluate the security of pre-trained ETA models. We targeted ET-BERT, a state-of-the-art model with superior performance, to generate adversarial traffic examples. To generate these examples, we drew inspiration from adversarial attacks on discrete data such as natural language, defined fluency from a network traffic perspective, and proposed a new attack algorithm that preserves this fluency. Finally, our experiments showed that the target model is vulnerable to the proposed adversarial attacks.
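The abstract does not spell out the attack algorithm itself; purely as a rough illustration of the general idea it describes (greedy token substitution against a BERT-style traffic classifier, with candidate perturbations rejected when a language-model "fluency" score drops too low), the following toy PyTorch sketch is provided. The tiny models, vocabulary size, thresholds, and helper names below are all hypothetical stand-ins, not the authors' actual method or the real ET-BERT checkpoint.

```python
# Illustrative sketch only: greedy, fluency-constrained token substitution
# against a toy traffic classifier. All sizes and thresholds are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, EMB_DIM, NUM_CLASSES, SEQ_LEN = 256, 32, 5, 20  # toy dimensions

class ToyEncoder(nn.Module):
    """Stand-in for a pre-trained traffic encoder (ET-BERT-like in spirit only)."""
    def __init__(self, out_dim):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.head = nn.Linear(EMB_DIM, out_dim)

    def forward(self, tokens):                         # tokens: (batch, seq)
        return self.head(self.emb(tokens).mean(dim=1))  # (batch, out_dim)

classifier = ToyEncoder(NUM_CLASSES)  # predicts the application class of a traffic burst
fluency_lm = ToyEncoder(VOCAB_SIZE)   # proxy LM used to score how "plausible" a sequence is

def fluency_score(tokens):
    """Average log-probability the proxy LM assigns to the sequence's own tokens."""
    logp = F.log_softmax(fluency_lm(tokens), dim=-1)   # (1, vocab)
    return logp[0, tokens[0]].mean().item()

def greedy_attack(tokens, true_label, max_subs=5, fluency_floor=-8.0):
    """Greedily substitute tokens to flip the classifier while keeping fluency high."""
    tokens = tokens.clone()
    for _ in range(max_subs):
        if classifier(tokens).argmax(dim=-1).item() != true_label:
            return tokens                              # prediction flipped: attack done
        best = None
        for pos in range(tokens.size(1)):
            for cand in range(VOCAB_SIZE):
                trial = tokens.clone()
                trial[0, pos] = cand
                if fluency_score(trial) < fluency_floor:   # reject non-fluent traffic
                    continue
                loss = F.cross_entropy(classifier(trial),
                                       torch.tensor([true_label])).item()
                if best is None or loss > best[0]:         # keep the most damaging edit
                    best = (loss, trial)
        if best is None:
            break                                          # no fluent substitution found
        tokens = best[1]
    return tokens

x = torch.randint(0, VOCAB_SIZE, (1, SEQ_LEN))             # a tokenized traffic burst
adv = greedy_attack(x, true_label=classifier(x).argmax().item())
```

In a real setting the toy encoder and proxy LM would be replaced by the pre-trained target model and a masked-language-model scorer, and the fluency constraint would encode protocol-level validity of the traffic bytes; the sketch only conveys the loop structure of a constrained discrete-substitution attack.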