Abstract

Traffic classification is one of the fundamental tasks in computer networking. It aims to associate network traffic with a specific class according to requirements such as QoS provisioning. Online classification, in which flows must be classified in real time, is an essential technique for this task. Recent academic research has proposed traffic classification methods based on machine learning (ML) or deep learning (DL). However, most of these methods take flow-level data as input, which requires observing the entire flow or a large portion of it and therefore violates the constraints of online classification. Furthermore, the DL-based methods scarcely discuss interpretability (e.g., which features the DL model learns and where its discriminative power comes from). This lack of interpretability calls their reliability into question and may hinder further deployment. In this paper, we propose a self-attentive method (SAM) for traffic classification. We design a neural network that accepts fine-grained, packet-level input and outputs classification results in ∼2 ms per packet, thereby satisfying the requirements of online classification. Furthermore, we employ the self-attention mechanism to explore interpretability: by assigning attentive weights to different parts of the input, it reveals how the DL model learns discriminative features from the input. Experimental results show that SAM outperforms current state-of-the-art schemes, improving classification accuracy by ∼8% (protocol classification), ∼5% (application classification), and ∼13% (traffic type classification). The code is available at https://github.com/xgr19/SAM-for-Traffic-Classification.
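To make the attention idea concrete, the sketch below shows a minimal single-head self-attention layer over the bytes of a single packet in PyTorch. It is not the authors' released implementation (see the repository linked above); the byte-embedding size, sequence length, and the class and variable names are illustrative assumptions. The point is that the attention matrix itself is returned, so the weight each byte position receives can be inspected for interpretability.

```python
# Minimal sketch of single-head scaled dot-product self-attention over packet bytes.
# Hypothetical names and dimensions; not taken from the SAM repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ByteSelfAttention(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # one embedding per byte value
        self.q = nn.Linear(embed_dim, embed_dim)
        self.k = nn.Linear(embed_dim, embed_dim)
        self.v = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** 0.5

    def forward(self, packet_bytes):
        # packet_bytes: (batch, num_bytes) integer tensor of raw header/payload bytes
        x = self.embed(packet_bytes)                      # (batch, num_bytes, embed_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        out = attn @ v                                    # attended byte representations
        # `attn` holds the per-byte attentive weights that can be inspected
        # to see which input positions the model relies on.
        return out, attn

# Usage: inspect which byte positions receive attention for one packet.
model = ByteSelfAttention()
pkt = torch.randint(0, 256, (1, 50))                      # one packet, first 50 bytes
_, weights = model(pkt)
print(weights.shape)                                      # torch.Size([1, 50, 50])
```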
