Abstract

In this paper, we investigate the impact of pattern leakage during data preprocessing on the reliability of Machine Learning (ML) based intrusion detection systems (IDS). Data leakage, also known as pattern leakage, occurs when information from the testing set influences preprocessing or training, leading to overfitting and inflated accuracy scores. Our study uses three well-known intrusion detection datasets: NSL-KDD, UNSW-NB15, and KDDCUP99. We preprocess each dataset to create versions with and without pattern leakage, then train and test six ML models: Decision Tree (DT), Gradient Boosting (GB), K-Nearest Neighbours (KNN), Support Vector Machine (SVM), Random Forest (RF), and Logistic Regression (LR). Our results show that building IDS models with data leakage yields higher accuracy scores but unreliable models. We also find that some algorithms are more sensitive to data leakage than others, as reflected in the drop in accuracy when the same models are built without leakage. To address this problem, we provide suggestions for mitigating data leakage during training and analyze the sensitivity of the different algorithms to it. Overall, our study emphasizes the importance of eliminating data leakage from the training process to ensure the reliability of ML-based IDS models.
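The paper's exact pipeline is not reproduced here; as a minimal sketch of the kind of preprocessing leakage described above, assuming scikit-learn, synthetic placeholder data (standing in for NSL-KDD and the other datasets), and min-max scaling as the leaking step, the following contrasts fitting the scaler on the full dataset (leaky) with fitting it on the training split only (leakage-free):

```python
# Minimal sketch (not from the paper): contrasts a leaky preprocessing
# pipeline with a leakage-free one, using scikit-learn and synthetic
# data as a stand-in for datasets such as NSL-KDD.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))                        # placeholder features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # placeholder labels

# Leaky pipeline: the scaler is fitted on the FULL dataset, so min/max
# statistics from the test set leak into the training features.
X_all_scaled = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_all_scaled, y, test_size=0.3, random_state=0)
leaky = KNeighborsClassifier().fit(X_tr, y_tr)
print("accuracy with leakage:   ",
      accuracy_score(y_te, leaky.predict(X_te)))

# Leakage-free pipeline: split FIRST, fit the scaler on the training
# split only, then apply the already-fitted scaler to the test split.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0)
scaler = MinMaxScaler().fit(X_tr)
clean = KNeighborsClassifier().fit(scaler.transform(X_tr), y_tr)
print("accuracy without leakage:",
      accuracy_score(y_te, clean.predict(scaler.transform(X_te))))
```

The size of the gap between the two scores depends on the dataset and the model (the paper finds some algorithms far more sensitive than others); the sketch only illustrates the mechanical difference between the two pipelines.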
