Abstract

Network intrusion detection (NID) is an important cybersecurity technique for identifying attacks in network traffic. In recent years, many studies have tried to improve the accuracy of NID with various deep learning (DL) approaches. However, these models typically demand substantial computation and memory, which is a major hurdle to the practical deployment of DL-based models. A lightweight model is therefore imperative, yet DL-based lightweight algorithms have rarely been applied to NID. In this paper, we propose a lightweight knowledge distillation (LKD) model for NID that combines knowledge distillation with separable convolution. To the best of our knowledge, it is the first system to apply knowledge distillation to NID. Experimental results show that the proposed approach reaches accuracies of 91.46% and 94.30% on the KDD-CUP99 and UNSW-NB15 datasets, respectively. Our model outperforms several approaches based on deep neural networks as well as some machine learning methods. Moreover, both the computational cost and the model size are reduced by about 99% compared to the original model.
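The abstract names knowledge distillation as the core technique. As a rough illustration (not the paper's actual LKD model), the standard distillation objective combines a temperature-softened KL term against the teacher's outputs with ordinary cross-entropy on the true labels; a minimal NumPy sketch, with the hyperparameters `T` and `alpha` chosen arbitrarily here:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T yields softer probabilities.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between the teacher's and student's
    # temperature-softened class distributions.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Hard-target term: standard cross-entropy against the true labels.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    # The T**2 factor rescales the soft-term gradients, as is conventional.
    return np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce)
```

A compact student (e.g. one built from separable convolutions, as the paper suggests) would be trained to minimize this loss against a larger teacher's logits; the exact architecture and weighting used in the LKD model are described in the paper itself.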
