Abstract

In intrusion detection, the curse of dimensionality and the trade-off between maintaining a low false alarm rate and achieving a high detection rate are significant challenges. This research proposes a novel strategy based on dimensionality reduction to improve the performance of network intrusion detection systems (NIDS). A Long Short-Term Memory Autoencoder (LSTMAE) compresses high-dimensional network traffic data, and the reduced features are then fed to a classifier to identify anomalies that may indicate an attack. The proposed model is tested on standard datasets, including Network Security Laboratory - Knowledge Discovery in Databases (NSL-KDD), UNSW-NB15, and Canadian Institute for Cybersecurity - Intrusion Detection Systems (CICIDS2017), with classifiers such as Random Forest (RF) and LightGBM (Light Gradient Boosting Machine). By adopting this method, NIDS response times can be improved while data storage and processing costs are reduced. Precision, recall, F-score, accuracy, detection rate (DR), and false alarm rate (FAR) are among the performance measures used to assess the quality of the proposed models. The experimental findings show that the proposed LSTMAE model reduces prediction errors more effectively than classical machine learning and deep learning techniques such as Random Forest (RF), Gradient Boosting (GB), Support Vector Machines (SVM), Deep Belief Networks (DBN), Deep Neural Networks (DNN), Autoencoder (AE), and Long Short-Term Memory (LSTM). The results also show that the proposed solution outperforms state-of-the-art methods in detection accuracy and computational complexity, as measured by accuracy, precision, recall, F1-score, detection rate, and FAR.
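As a concrete illustration of the pipeline described above, the following minimal sketch compresses records with an LSTM autoencoder and classifies the latent features with LightGBM. The layer sizes, epoch count, and synthetic placeholder data are illustrative assumptions, not the paper's exact configuration; the placeholder feature count of 41 loosely mirrors NSL-KDD.

```python
# Sketch of the LSTMAE + classifier pipeline (assumed configuration).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
import lightgbm as lgb

n_samples, timesteps, n_features, latent_dim = 1000, 1, 41, 16

# Placeholder data standing in for preprocessed NSL-KDD / UNSW-NB15 /
# CICIDS2017 records; 0 = normal traffic, 1 = attack.
X = np.random.rand(n_samples, timesteps, n_features).astype("float32")
y = np.random.randint(0, 2, n_samples)

# LSTM autoencoder: the encoder compresses each record into a latent
# vector; the decoder reconstructs the original features from it.
inputs = keras.Input(shape=(timesteps, n_features))
encoded = layers.LSTM(latent_dim)(inputs)
decoded = layers.RepeatVector(timesteps)(encoded)
decoded = layers.LSTM(n_features, return_sequences=True)(decoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)

# Reuse the trained encoder to produce the reduced representation.
encoder = keras.Model(inputs, encoded)
X_reduced = encoder.predict(X, verbose=0)

# Classify the compressed features (LightGBM here; RF is analogous).
clf = lgb.LGBMClassifier(n_estimators=100)
clf.fit(X_reduced, y)
print(clf.predict(X_reduced[:5]))
```

In this design, the autoencoder is trained only on reconstruction, so the classifier operates on a compact representation rather than the full feature space, which is the source of the storage and processing savings claimed above.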
