Abstract

In recent years, machine learning-based intrusion detection systems (IDSs) have proven to be effective; in particular, deep neural networks improve the detection rates of intrusion detection models. However, as models become more and more complex, it becomes harder to understand the reasoning behind their decisions. At the same time, most work on model interpretation focuses on other fields such as computer vision, natural language processing, and biology. As a result, in practical use, cybersecurity experts can hardly optimize their decisions based on the model's judgments. To address these issues, this paper proposes a framework for explaining IDSs. The framework uses SHapley Additive exPlanations (SHAP) and combines local and global explanations to improve the interpretability of IDSs. The local explanations give the reasons why the model makes a certain decision on a specific input. The global explanations identify the important features extracted by IDSs and present the relationships between feature values and different types of attacks. In addition, the interpretations of two different classifiers, a one-vs-all classifier and a multiclass classifier, are compared. The NSL-KDD dataset is used to test the feasibility of the framework. The proposed framework improves the transparency of any IDS and helps cybersecurity staff better understand the IDS's judgments. Furthermore, the different interpretations produced by different kinds of classifiers can also help security experts better design the structure of an IDS. More importantly, this work is unique in the intrusion detection field, presenting the first use of the SHAP method to explain IDSs.
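
As a purely illustrative sketch of the two classifier designs compared in the abstract, the snippet below trains a one-vs-all ensemble and a single multiclass model. This is not the authors' code: a random forest on synthetic 41-dimensional features stands in for the paper's intrusion detection model, and five generated classes mimic the NSL-KDD traffic categories (Normal, DoS, Probe, R2L, U2R).

```python
# Minimal sketch (not the authors' code): contrasting the two classifier
# designs the paper compares -- a one-vs-all ensemble and a single
# multiclass model. Synthetic data stands in for NSL-KDD records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# 41 features and 5 classes mimic NSL-KDD's feature count and its
# Normal, DoS, Probe, R2L, U2R categories.
X, y = make_classification(n_samples=5000, n_features=41, n_informative=20,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One-vs-all: one binary detector per traffic class.
ova = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
ova.fit(X_train, y_train)

# Multiclass: a single model that predicts one of the five labels directly.
multi = RandomForestClassifier(n_estimators=100, random_state=0)
multi.fit(X_train, y_train)

print("one-vs-all accuracy:", ova.score(X_test, y_test))
print("multiclass accuracy:", multi.score(X_test, y_test))
```

Because the two designs decompose the decision differently, their SHAP explanations can also differ, which is the comparison the paper draws.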

Highlights

  • With the enormous growth of cyber networks’ usage and the vast number of applications running on them, network security is becoming increasingly important

  • SHapley Additive exPlanations (SHAP) is a method that provides local and global interpretability at the same time, and it has a solid theoretical foundation compared to other methods

  • Experiments and results: the experimental setup is discussed, including the dataset used in the experiments, the structure of the intrusion detection systems (IDSs), the procedure for training the intrusion detection models, and the performance of the models


Summary

Introduction

With the enormous growth of cyber networks’ usage and the vast number of applications running on them, network security is becoming increasingly important. It is estimated that a trillion physical devices will be connected to the Internet by 2022 [4]. These new technological developments have raised security and privacy concerns. Model interpretability can be divided into two categories: global interpretability and local interpretability [45]. Global interpretability means that users can understand the model directly from its overall structure, while local interpretability means that users can understand why the model makes a particular decision on a single input. SHAP is a method that provides local and global interpretability at the same time, and it has a solid theoretical foundation compared to other methods. SHAP connects LIME [32] and Shapley values [33].
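
The snippet below is a minimal sketch of the local/global SHAP workflow described above, under stated assumptions: the open-source `shap` package, synthetic features standing in for NSL-KDD records, and a gradient-boosted tree detector in place of the paper's actual model (a binary normal-vs-attack detector is used so the attribution arrays keep a simple shape). The per-record attribution vector corresponds to a local explanation; the mean absolute SHAP value per feature corresponds to the global feature ranking.

```python
# Minimal sketch of local and global SHAP explanations for an IDS-like model.
# Assumptions: the `shap` package is installed; synthetic data replaces
# NSL-KDD; a gradient-boosted tree detector replaces the paper's model.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for NSL-KDD records: 41 features, label 1 = attack.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=15,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:200])   # (200, 41) attribution matrix

# Local explanation: per-feature contributions for a single record,
# i.e. why the model pushed this one flow toward "attack" or "normal".
print(shap_values[0])

# Global explanation: mean |SHAP| per feature across many records
# ranks which features matter most to the detector overall.
print(np.abs(shap_values).mean(axis=0))
```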

