Abstract

Machine learning (ML) techniques are increasingly important in cybersecurity, as they can quickly analyse and identify different types of threats across millions of events. Despite the growing number of possible applications of ML, successful adoption of ML models in cybersecurity still relies heavily on the explainability of the models used for making predictions. Explanations that support ML model outputs are crucial in cybersecurity-oriented ML applications because analysts need more information from a model than a binary output. Explainable models help ML developers address the “trust” problem for security predictions in a faithful way: validating model behaviour, diagnosing misclassifications, and in some cases automatically patching errors in the target models. Explainable ML for cybersecurity has therefore become a necessary and important research branch. In this paper, we present the topic of explainable ML in cybersecurity through two general types of explanations, (1) ante hoc explanation and (2) post hoc explanation, together with their methodologies. We systematically review and categorise the state-of-the-art research, and provide comparative studies to help researchers find optimal solutions to specific problems. We further list open issues in this field to facilitate future studies. This survey will benefit diverse groups of readers from both academia and industry who want to use ML effectively to solve cybersecurity challenges.
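
To make the post hoc category concrete, the sketch below illustrates one common post hoc technique, permutation feature importance, applied to a toy intrusion-detection classifier. This is only an illustrative example, not a method proposed in the survey; the feature names and the synthetic data are placeholders standing in for a real security dataset.

```python
# Minimal illustrative sketch (not the survey's code): post hoc explanation
# of a hypothetical intrusion-detection classifier via permutation feature
# importance. Feature names and data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "network flow" features standing in for a real security dataset.
feature_names = ["duration", "bytes_sent", "bytes_recv", "pkt_rate", "port_entropy"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black-box" detector whose behaviour we want to explain after training.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post hoc explanation: how much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>12}: {mean:.3f} +/- {std:.3f}")
```

An ante hoc (intrinsically interpretable) alternative would instead use a model that is transparent by design, such as a small decision tree or a linear model, so that no separate explanation step is needed.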
