Abstract

Network traffic classification is an increasingly important prerequisite for network management. An accurate traffic classifier can support traffic engineering, intrusion detection, and user behavior analysis. Recently, deep learning (DL) has attracted considerable attention for traffic classification because of its strong learning ability without the need for handcrafted feature engineering or privacy invasion. However, most DL-based traffic classification solutions focus mainly on improving identification performance, with little regard for the computational and memory burden caused by large model sizes or data redundancy. Motivated by these gaps, we elaborate on the preprocessing of raw traffic data and investigate a lightweight traffic classifier. Specifically, we design a preprocessing approach that converts raw traffic data into datasets usable by DL-based traffic classifiers: it tailors raw traffic into training datasets by resolving the structure and content of the traffic and pruning redundant information. Because of its ability to focus on key information, an attention mechanism is introduced to design an attention-based long short-term memory (LSTM) model for traffic classification, termed ABL-TC. ABL-TC can effectively identify global dependencies between the input and the output from a limited number of important raw features, which helps it learn the key information needed to distinguish various traffic types. Extensive experiments on real-world public datasets show that ABL-TC outperforms state-of-the-art approaches on various metrics for recognizing traffic categories while remaining competitive in time and memory efficiency.
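To make the attention idea concrete, the following is a minimal sketch of attention-based pooling over a sequence of hidden states, such as those an LSTM would emit for the packets of a flow. This is an illustration of the general mechanism, not the paper's actual ABL-TC architecture; the toy hidden states and the query vector are hypothetical, and in a trained model the query would be a learned parameter.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of raw scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, query):
    """Dot-product attention: score each timestep's hidden state
    against a query vector, normalize with softmax, and return the
    attention-weighted sum (context vector) plus the weights."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query))
              for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights

# Toy hidden states for a 3-packet flow (hypothetical values);
# the classifier head would consume the resulting context vector.
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
q = [1.0, 1.0]
ctx, w = attention_pool(H, q)
```

The weights sum to one, and the timestep whose hidden state aligns best with the query receives the largest weight, which is how attention lets the model concentrate on the most informative features of a flow.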
