Abstract

Although the field of eXplainable Artificial Intelligence (XAI) has attracted significant interest in recent years, its implementation within cyber security applications still requires further investigation to understand its effectiveness in discovering attack surfaces and vectors. In cyber defence, especially anomaly-based Intrusion Detection Systems (IDS), the emerging applications of machine/deep learning models require interpretation of the models' architecture and explanation of the models' predictions to examine how cyberattacks occur. This paper proposes a novel explainable intrusion detection framework for Internet of Things (IoT) networks. We have developed an IDS using a Long Short-Term Memory (LSTM) model to identify cyberattacks and explain the model's decisions. The IDS is trained and evaluated on a set of input features extracted by a novel SPIP framework (S: SHapley Additive exPlanations, P: Permutation Feature Importance, I: Individual Conditional Expectation, P: Partial Dependence Plot). The framework was validated using the NSL-KDD, UNSW-NB15 and TON_IoT datasets. Compared with peer techniques, the SPIP framework achieved high detection accuracy, low processing time, and high interpretability of data features and model outputs. The proposed framework has the potential to assist administrators and decision-makers in understanding complex attack behaviour.
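The abstract does not specify the exact SPIP procedure, so the following is only a minimal sketch of how such a pipeline might be wired together: features are ranked with permutation feature importance (one of the four SPIP techniques; SHAP, ICE, and PDP scores would be combined in the same way), the top-k features are kept, and an LSTM detector is trained on them. The synthetic dataset, the surrogate RandomForest used for scoring, and the value of k are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
import tensorflow as tf

# Synthetic stand-in for a flow-feature dataset such as NSL-KDD (assumption).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Surrogate model used only to score features (assumption: the paper may
# score features against its own IDS model rather than a surrogate).
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pfi = permutation_importance(surrogate, X_te, y_te, n_repeats=10, random_state=0)

# Keep the k features with the highest mean permutation importance.
k = 10
top = np.argsort(pfi.importances_mean)[::-1][:k]

# LSTM detector: each sample is treated as a length-1 sequence of k features,
# a common idiom when feeding tabular NIDS features to a recurrent model.
X_tr_seq = X_tr[:, top].reshape(-1, 1, k)
X_te_seq = X_te[:, top].reshape(-1, 1, k)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(1, k)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr_seq, y_tr, epochs=5, batch_size=64, verbose=0)
print("test accuracy:", model.evaluate(X_te_seq, y_te, verbose=0)[1])
```

The same ranked-feature list could then be passed to SHAP explainers and to sklearn's PartialDependenceDisplay (with kind="individual" for ICE curves) to produce the per-feature explanations the framework is named for.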
