Abstract

To address the problems of high reconstruction error and long training time when using Stacked Nonsymmetric Deep Autoencoder (SNDAE) feature extraction for intrusion detection, the Adam Nonsymmetric Deep Autoencoder (ANDAE) is proposed based on SNDAE. The Adam optimization algorithm is used to update network parameters during training so that the loss function converges quickly to the ideal value. Without degrading feature extraction quality, the network structure is simplified and the training time of the network is reduced, enabling efficient feature extraction from rapidly growing, high-dimensional, nonlinear network traffic. Random Forest then classifies the low-dimensional, prominent features extracted by ANDAE to detect intrusions, yielding a network intrusion detection model based on ANDAE feature extraction. Experimental results on the NSL-KDD and CIC-IDS2017 datasets show that, compared to the SNDAE-based intrusion detection model, the ANDAE model achieves an average increase of 6.78% in accuracy, 13.06% in recall, and 14.9% in F1 score, while feature extraction time is reduced by 23.1% on average. Thus, the ANDAE model is an intrusion detection solution that improves both detection accuracy and time efficiency.
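
The abstract describes a two-stage pipeline: a dimensionality-reduction step (the trained ANDAE encoder) followed by a Random Forest classifier. Below is a minimal, hedged sketch of that flow in Python with scikit-learn. The input width of 122 features, the 14-dimensional latent representation, the stand-in encode() function, and the random placeholder data are illustrative assumptions, not values or code from the paper.

```python
# Hedged sketch of the two-stage pipeline from the abstract:
# (1) reduce traffic records to low-dimensional features with a trained encoder,
# (2) classify the reduced features with Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, f1_score

# Placeholder data standing in for preprocessed NSL-KDD / CIC-IDS2017 records
# (122 input features and binary labels are assumptions, not values from the paper).
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 122)), rng.integers(0, 2, 1000)
X_test, y_test = rng.random((200, 122)), rng.integers(0, 2, 200)

def encode(X):
    """Stand-in for ANDAE's trained encoder, which maps high-dimensional
    traffic features to a low-dimensional representation."""
    return X[:, :14]  # assume a 14-dimensional latent space for illustration

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(encode(X_train), y_train)
pred = clf.predict(encode(X_test))
print(accuracy_score(y_test, pred), recall_score(y_test, pred), f1_score(y_test, pred))
```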

Highlights

  • To address the above problems, many studies have adopted methods that combine deep learning and traditional machine learning. That is, a deep neural network is used to extract, in an unsupervised manner, the prominent features of the data distribution, reducing high-dimensional data to low-dimensional data, and traditional machine learning methods are then used to build a classification model for intrusion detection

  • Python is chosen as the programming language, and Adam Nonsymmetric Deep Autoencoder (ANDAE) is implemented with Google’s deep learning framework TensorFlow

  • The number of neurons in the two hidden layers of the ANDAE network is 14 and 24, respectively. The parameter values of Random Forest and the learning rate are the same as those set on the NSL-KDD dataset (a minimal sketch of this setup follows this list)
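
Below is a hedged sketch, in TensorFlow/Keras, of what a nonsymmetric autoencoder with the hidden-layer sizes listed above might look like, trained with Adam as described in the abstract. The input width, sigmoid activations, learning rate, MSE reconstruction loss, and the choice of which hidden layer serves as the extracted feature vector are assumptions, not settings confirmed by the paper.

```python
# Hedged sketch of an ANDAE-style nonsymmetric deep autoencoder in TensorFlow/Keras.
# The hidden-layer widths (14 and 24) come from the highlight above; everything else
# (input width, activations, learning rate, loss, layer ordering) is an assumption.
import tensorflow as tf

n_inputs = 122  # assumed input dimensionality after preprocessing

inputs = tf.keras.Input(shape=(n_inputs,))
h1 = tf.keras.layers.Dense(14, activation="sigmoid", name="hidden_1")(inputs)
h2 = tf.keras.layers.Dense(24, activation="sigmoid", name="hidden_2")(h1)
# Nonsymmetric design: reconstruct the input directly, without a mirrored decoder stack.
recon = tf.keras.layers.Dense(n_inputs, activation="sigmoid", name="reconstruction")(h2)

andae = tf.keras.Model(inputs, recon)
andae.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# Training reconstructs the input; the hidden representation is then reused as features.
# andae.fit(X_train, X_train, epochs=..., batch_size=...)
encoder = tf.keras.Model(inputs, h2)
# low_dim_features = encoder.predict(X)  # passed to Random Forest for classification
```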


Summary

Materials and Methods

Its purpose is to generate x̂ as similar as possible to the input data x, and it is often used for data dimensionality reduction and feature extraction. Then, the decoder d(x) reconstructs the low-dimensional data of the hidden layer. In the process of encoding, because the dimensionality decreases, the AE learns, as far as possible, the features that most significantly represent the data distribution, so that the decoder can more accurately use these newly learned features for reconstruction. The Deep Autoencoder (DAE) consists of two symmetric deep neural networks that are used for encoding and decoding, respectively. The work of Hinton and Salakhutdinov [22] demonstrated that using a DAE for feature extraction makes the data more discriminable after dimensionality reduction, so that data that originally could not be separated become separable. When a given layer of the network is trained, the parameters of all previous layers are frozen.
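
As an illustration of the layer-wise training described above, where each new layer is trained while all previously trained layers stay frozen, here is a hedged Keras sketch. The layer widths, activations, epoch count, and placeholder data are assumptions for illustration only, not the paper's actual training procedure or hyperparameters.

```python
# Hedged sketch of greedy layer-wise autoencoder pretraining: each new hidden layer
# is trained to help reconstruct the input while all previously trained layers are
# frozen, as stated above. Widths, activations, and epochs are illustrative assumptions.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 122).astype("float32")  # placeholder traffic features

layer_sizes = [24, 14]   # assumed hidden-layer widths for illustration
trained_layers = []

inputs = tf.keras.Input(shape=(X.shape[1],))
for size in layer_sizes:
    h = inputs
    for layer in trained_layers:     # reuse earlier layers with their weights frozen
        layer.trainable = False
        h = layer(h)
    new_layer = tf.keras.layers.Dense(size, activation="sigmoid")
    h = new_layer(h)
    # A fresh output layer reconstructs the original input from the current code.
    recon = tf.keras.layers.Dense(X.shape[1], activation="sigmoid")(h)
    model = tf.keras.Model(inputs, recon)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, X, epochs=5, batch_size=64, verbose=0)
    trained_layers.append(new_layer)
```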

