Abstract

Network intrusion detection plays an important role in network security. Although current deep learning-based intrusion detection algorithms achieve good detection performance, they remain limited when handling imbalanced datasets and identifying minority-class and unknown attacks. In this paper, we propose AE-SAC, an intrusion detection model based on adversarial environment learning and the soft actor-critic (SAC) reinforcement learning algorithm. First, we introduce an environment agent that resamples the training data to address the class imbalance of the original data. Second, we redefine the rewards in reinforcement learning: to improve the recognition rate of minority attack categories, different reward values are assigned to different categories of attacks. The environment agent and the classifier agent are trained adversarially, each maximizing its own reward. Finally, multi-class classification experiments on the NSL-KDD and AWID datasets compare AE-SAC with existing state-of-the-art intrusion detection algorithms. AE-SAC achieves excellent classification performance, with an accuracy of 84.15% and an F1-score of 83.97% on the NSL-KDD dataset, and an accuracy and F1-score above 98.9% on the AWID dataset.

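As a rough illustration of the class-dependent reward idea described above (not the paper's actual reward scheme), the sketch below assigns larger rewards to minority attack categories and gives the adversarial environment agent the negated classifier reward. The NSL-KDD class names and the specific reward magnitudes are assumptions chosen for illustration only.

```python
# Hypothetical per-class rewards: minority attack classes (e.g. R2L, U2R in NSL-KDD)
# receive larger rewards so the classifier agent is pushed to identify them.
CLASS_REWARDS = {
    "normal": 1.0,
    "dos":    1.0,
    "probe":  1.5,
    "r2l":    2.5,   # minority class, higher reward when classified correctly
    "u2r":    3.0,   # rarest class, highest reward
}

def classifier_reward(true_label: str, predicted_label: str) -> float:
    """Reward for the classifier agent: positive class-scaled reward on a
    correct prediction, a penalty of the same magnitude on a mistake."""
    r = CLASS_REWARDS[true_label]
    return r if predicted_label == true_label else -r

def environment_reward(true_label: str, predicted_label: str) -> float:
    """Reward for the adversarial environment agent: the negation of the
    classifier's reward, driving it to resample hard or minority examples."""
    return -classifier_reward(true_label, predicted_label)
```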