Abstract

The development of network intrusion detection and prevention systems usually leverages a rule-based approach, in which rules are defined by network security experts who can draw on logic from both low and high network layers. In recent times, however, machine learning methods have also achieved promising results in building Network Intrusion Detection Systems, and their popularity is steadily rising. Unfortunately, the use of these machine learning methods in real-life problems has repeatedly shown that no good out-of-the-box solution exists for production or deployment. Moreover, because the volume and complexity of the data that machine learning methods must process grow over time, improvements and adaptations are frequently required. As the problem at hand becomes more convoluted, so does the nature of the applied solution. This complexity is further compounded by the fact that certain machine and deep learning methods intrinsically offer no way of understanding how they make decisions, effectively behaving like black boxes. All of this significantly lowers the understandability of solutions deployed in production environments that are already quite complex, which justifies the need for interpretability methods. While interpretability methods are commonly designed to be used by humans, in this paper we propose a way of improving a model's classification performance by applying data mining methods to the explanation data generated by interpretability methods. The paper's main contribution is improving a previously built network intrusion detection system by proposing an automated process for integrating explanations into the original data, with the purpose of improving both the interpretability and the score of the machine learning model used.
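
The abstract describes augmenting the original feature set with explanation data before retraining a classifier. The snippet below is a minimal sketch of that general idea, not the paper's actual pipeline: it assumes a synthetic stand-in dataset, a logistic-regression surrogate whose per-feature contributions (coefficient times standardized feature value) act as a simple local explanation in place of the paper's interpretability method, and a random forest as the downstream model.

```python
# Sketch: integrate per-sample explanation values into the original feature
# space and retrain. Dataset, models, and the linear "contribution"
# explanation are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for network flow features (hypothetical data).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Interpretable surrogate whose explanations we mine.
surrogate = LogisticRegression(max_iter=1000).fit(X_train_s, y_train)

def linear_contributions(model, X_s):
    """Per-sample, per-feature contributions (coefficient * feature value).

    A simple local explanation; the paper may rely on a different
    interpretability method (e.g. SHAP or LIME)."""
    return X_s * model.coef_[0]

# Append explanation data to the original features.
X_train_aug = np.hstack([X_train, linear_contributions(surrogate, X_train_s)])
X_test_aug = np.hstack([X_test, linear_contributions(surrogate, X_test_s)])

baseline = RandomForestClassifier(random_state=0).fit(X_train, y_train)
augmented = RandomForestClassifier(random_state=0).fit(X_train_aug, y_train)

print("baseline F1 :", f1_score(y_test, baseline.predict(X_test)))
print("augmented F1:", f1_score(y_test, augmented.predict(X_test_aug)))
```

Whether the augmented features actually raise the score depends on the data and the chosen explainer; the abstract's claim is about automating this integration step, not about one particular explanation technique.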
