Abstract

In recent years, deep learning-based approaches, particularly those leveraging the Transformer architecture, have garnered widespread attention for network traffic anomaly detection. However, on noisy datasets, directly feeding raw network traffic sequences into Transformer networks often degrades detection performance significantly due to interference and noise across dimensions. In this paper, we propose a novel multi-channel network traffic anomaly detection model, MTC-Net, which reduces computational complexity and enhances the model's ability to capture long-distance dependencies. This is achieved by decomposing network traffic sequences into multiple unidimensional time sequences and introducing a patch-based strategy that enables each sub-sequence to retain local semantic information. A backbone network combining Transformer and CNN is employed to capture complex patterns, and the information from all channels is fused at the final classification head to model and detect complex network traffic patterns. Experimental results demonstrate that MTC-Net outperforms existing state-of-the-art methods on several evaluation metrics, including accuracy, precision, recall, and F1 score, across four publicly available datasets: KDD Cup 99, NSL-KDD, UNSW-NB15, and CIC-IDS2017.
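The channel decomposition and patching step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch length and stride values are assumptions chosen for the example, and the subsequent Transformer/CNN backbone and fusion head are omitted.

```python
import numpy as np

def decompose_and_patch(x, patch_len=16, stride=8):
    """Split a multivariate traffic sequence of shape (T, D) into D
    univariate channels, then cut each channel into overlapping patches
    of length patch_len so that local semantic information is retained.
    Returns an array of shape (D, num_patches, patch_len); each channel's
    patch sequence would then be fed to its own backbone.
    Note: patch_len and stride are illustrative defaults, not values
    reported in the paper."""
    T, D = x.shape
    num_patches = (T - patch_len) // stride + 1
    patches = np.stack([
        np.stack([x[i * stride : i * stride + patch_len, d]
                  for i in range(num_patches)])
        for d in range(D)
    ])
    return patches

# Example: a sequence of 128 time steps with 5 traffic features
x = np.random.randn(128, 5)
p = decompose_and_patch(x)
print(p.shape)  # (5, 15, 16): 5 channels, 15 patches of 16 steps each
```

Treating each feature dimension as an independent channel keeps the per-channel attention cost low, while the patch tokens shorten the sequence the backbone must attend over, which is how the approach reduces computational complexity while preserving long-range structure.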
