Abstract

Anomaly detection is a problem with roots dating back over 30 years. The NSL-KDD dataset has become the conventional benchmark for testing and comparing new or improved models in this domain. In the field of network intrusion detection, the UNSW-NB15 dataset has recently gained significant attention over NSL-KDD because it contains more modern attacks. In the present paper, we outline two cutting-edge architectures that push the boundaries of model accuracy for these datasets, both framed in the context of anomaly detection and intrusion classification. We summarize training methodologies, hyperparameters, regularization, and other aspects of the model architectures. Moreover, we utilize the standard deviation of weight values to design a new regularization technique, embed it in both models, and report the models' performance. Finally, we detail potential improvements aimed at increasing the models' accuracy.
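As a rough illustration of what a standard-deviation-based weight regularizer could look like, here is a minimal PyTorch sketch. The penalty form, the coefficient `lam`, and the restriction to weight matrices are all assumptions; the paper's exact formulation is not given in this summary.

```python
import torch

def weight_std_penalty(model: torch.nn.Module, lam: float = 1e-4) -> torch.Tensor:
    # Sum the standard deviation of every weight matrix (biases skipped)
    # and scale by lam. The form and the value of lam are assumptions,
    # not the paper's stated method.
    stds = [p.std() for _, p in model.named_parameters() if p.dim() > 1]
    return lam * torch.stack(stds).sum()

# Assumed usage inside a training step (model, loss_fn, x, y defined elsewhere):
# loss = loss_fn(model(x), y) + weight_std_penalty(model)
```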

Highlights

  • Since we are operating in the domain of anomaly detection, we train the model on normal data only. The model predicts whether an arbitrary input fits the learned representation. The autoencoder is trained on the entire feature set. The encoder and decoder are composed of four layers with an encoding dimension of 256 units. The number of units is halved at each subsequent layer in the encoder, with the inverse being true for the decoder (see the sketch after this list)

  • Each model is trained on both the NSL-KDD and UNSW-NB15 datasets using a train-test split (75% for training and 25% for validation) and is explicitly evaluated on the test set provided with each dataset, KDDTest+ and UNSW_NB15_testing-set

  • We report the results in terms of accuracy and false positive rate (FPR) for the proposed feed-forward network (FFN) and the proposed variational autoencoder (VAE) on both datasets
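The highlights above translate into a fairly standard anomaly-detection setup; below is a minimal PyTorch sketch of it. The layer widths (256 halved at each of the four encoder layers, mirrored in the decoder) and the 75/25 split follow the highlights; the input width of 122 (a common one-hot encoding of NSL-KDD features), the ReLU activations, the loss, and the learning rate are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

class Autoencoder(nn.Module):
    """Four encoder layers, 256 units halved at each layer (256->128->64->32);
    the decoder mirrors the encoder. ReLU activations are an assumption."""
    def __init__(self, input_dim: int = 122):  # 122 = one-hot NSL-KDD width (assumption)
        super().__init__()
        widths = [256, 128, 64, 32]
        layers, last = [], input_dim
        for w in widths:                        # encoder: halve units each layer
            layers += [nn.Linear(last, w), nn.ReLU()]
            last = w
        self.encoder = nn.Sequential(*layers)
        layers = []
        for w in reversed(widths[:-1]):         # decoder: inverse of the encoder
            layers += [nn.Linear(last, w), nn.ReLU()]
            last = w
        layers.append(nn.Linear(last, input_dim))  # linear reconstruction output
        self.decoder = nn.Sequential(*layers)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on normal records only; an input is flagged anomalous when its
# reconstruction error exceeds a validation-chosen threshold (assumption).
X_normal = torch.randn(1000, 122)  # placeholder for preprocessed normal traffic
X_train, X_val = train_test_split(X_normal.numpy(), test_size=0.25, random_state=0)
X_train = torch.as_tensor(X_train)

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
for epoch in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_train), X_train)
    loss.backward()
    opt.step()
```

The reconstruction-error threshold would typically be tuned on the validation split to trade accuracy against FPR, matching the metrics reported in the third highlight.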


Summary

Introduction

Suppose that we have three different functions f1, f2, and f3, and let f(x) be the composite of all these functions, denoted as f(x) = f3(f2(f1(x))). This composition describes the structure of neural networks: f1 is the first layer, f2 is the second layer, and so on. The number of functions in the composition is the depth of the neural network model. The final, outermost function is known as the output layer in neural network terminology.
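To make the composition concrete, here is a tiny NumPy sketch of a depth-3 network built exactly as f(x) = f3(f2(f1(x))). The weight shapes and tanh activations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder weights; the shapes (3 -> 4 -> 4 -> 2) are illustrative assumptions.
W1, W2, W3 = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))

def f1(x): return np.tanh(W1 @ x)   # first layer
def f2(x): return np.tanh(W2 @ x)   # second layer
def f3(x): return W3 @ x            # outermost function: the output layer

def f(x):                           # depth-3 composition f3(f2(f1(x)))
    return f3(f2(f1(x)))

print(f(np.ones(3)))                # network output for a toy input
```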


