Abstract
Anomaly classification based on network traffic features is an important task for monitoring and detecting network intrusion attacks. Network-based intrusion detection systems (NIDSs) using machine learning (ML) methods are effective tools for protecting network infrastructures and services from unpredictable and unseen attacks. Among several ML methods, random forest (RF) is a robust method that can be used in ML-based network intrusion detection solutions. However, the minimum number of instances for each split and the number of trees in the forest are two key parameters of RF that can affect classification accuracy. Therefore, optimal parameter selection is a real problem in RF-based anomaly classification for intrusion detection systems. In this paper, we propose to use the genetic algorithm (GA) to select appropriate values for these two parameters, optimizing the RF classifier and improving the classification accuracy of normal and abnormal network traffic. To validate the proposed GA-based RF model, a number of experiments are conducted on two public datasets and evaluated using a set of performance evaluation measures. In these experiments, the accuracy result is compared with the accuracies of baseline ML classifiers in recent works. Experimental results reveal that the proposed model can avert the uncertainty in selecting the values of RF's parameters, improving the accuracy of anomaly classification in NIDSs without incurring excessive time.
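To illustrate the idea the abstract describes, the following is a minimal sketch of a genetic algorithm searching the two RF hyperparameters (minimum samples per split and number of trees) by maximizing cross-validated accuracy. The synthetic dataset, population size, parameter ranges, and GA operators below are illustrative assumptions, not the paper's actual configuration:

```python
# Illustrative sketch (assumed setup): GA tuning of two RF parameters.
import random

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

random.seed(0)
# Synthetic stand-in for a network-traffic dataset (assumption).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(ind):
    """Cross-validated accuracy of an RF built from one individual."""
    min_split, n_trees = ind
    clf = RandomForestClassifier(min_samples_split=min_split,
                                 n_estimators=n_trees, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def random_individual():
    # Encode an individual as (min_samples_split, n_estimators).
    return (random.randint(2, 20), random.randint(10, 100))

def mutate(ind):
    # Perturb one of the two genes within its valid range.
    min_split, n_trees = ind
    if random.random() < 0.5:
        min_split = max(2, min_split + random.randint(-3, 3))
    else:
        n_trees = max(10, n_trees + random.randint(-20, 20))
    return (min_split, n_trees)

def crossover(a, b):
    # Single-point crossover: take one gene from each parent.
    return (a[0], b[1])

# Small population and few generations, for illustration only.
population = [random_individual() for _ in range(6)]
for generation in range(3):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:3]                      # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(3)]
    population = parents + children

best = max(population, key=fitness)
print("best (min_samples_split, n_estimators):", best)
```

In practice the paper's experiments would use the two public intrusion-detection datasets rather than synthetic data, and larger populations and generation counts.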