Abstract
Security mechanisms are a vital part of computer network design in modern organisations. In particular, implementing the principle of layered security to harden a network against attacks requires introducing checkpoints into the connectivity of its components, which inevitably degrades network performance. Advanced intrusion detection systems (IDSs) deployed at these checkpoints enable the analysis and determination of ‘optimal’ security versus performance trade-offs. To this end, a novel quantitative method is proposed for evaluating and predicting these trade-offs, supported by Machine Learning Algorithms (MLAs), namely the Random Forest (RF) classifier, Logistic Regression (LR) and Naïve Bayes (NB), for Network Intrusion Detection Systems (NIDSs). In this context, Feature Selection (FS) is employed to reduce the high dimensionality of the dataset, and the most highly weighted features are retained to keep the false-negative (FN) rate low and to increase the accuracy of the MLAs towards establishing ‘optimal’ performance versus security trade-offs. Numerical experiments indicate that the RF classifier is the best-performing MLA: using a subset of 19 selected features, it correctly identifies different types of attack with 99.9% accuracy.
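A minimal sketch of the pipeline the abstract describes, assuming scikit-learn; the synthetic dataset, the importance-based feature ranking, and all parameter values here are illustrative placeholders, not the paper's actual dataset or configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for an NIDS dataset (flow features + binary attack label).
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=19, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature Selection (FS): rank features by RF importance weights and
# keep the 19 most highly weighted ones, mirroring the subset size
# reported in the abstract.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top19 = np.argsort(ranker.feature_importances_)[::-1][:19]

# Compare the three MLAs named in the abstract on the reduced feature set.
models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr[:, top19], y_tr)
    pred = model.predict(X_te[:, top19])
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    # Report both accuracy and the false-negative count, since the
    # method targets a low FN rate alongside high accuracy.
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, FN={fn}")
```

The same skeleton extends to multi-class attack labels and to a real NIDS dataset by swapping in the real feature matrix and labels.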