Abstract

Many previous studies have investigated applying artificial intelligence (AI) to cyber security. Despite considerable performance advantages, AI for cyber security still requires final confirmation by an analyst, since errors such as malware misdetection can cause significant adverse side effects. Thus, a human analyst must check all AI predictions, which poses a major obstacle to wider AI adoption. This paper proposes a reliability indicator for AI predictions that uses explainable artificial intelligence and statistical analysis techniques. The indicator enables analysts with a limited daily workload to focus on the most valuable data and to verify AI predictions quickly. Analysts generally make decisions based on a few features whose meaning they understand precisely, rather than on all available features. Since the proposed reliability indicator is calculated from features that are meaningful to analysts, it is easy to understand and hence speeds up final decisions. To verify the performance of the proposed method, experiments were conducted on an intrusion detection system (IDS) dataset and a malware dataset. The proposed method detected AI errors approximately 114% better on the IDS dataset and 95% better on the malware dataset than the existing AI model. Thus, adopting the proposed method could greatly improve cyberattack response.
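As a rough illustration of the explainable-AI component described above, the sketch below computes per-feature SHAP values for a tree-based detector. It is a minimal sketch only: the model choice, the synthetic features, and the toy "malicious" label are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                        # stand-in for IDS flow features
y_train = (X_train[:, 0] + X_train[:, 3] > 0).astype(int)  # 1 = "malicious" (toy label)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# One SHAP value per feature per sample: a positive value means the feature
# pushed this prediction toward the "malicious" class, a negative value away
# from it. (Depending on the shap version, the result is a list or a 3-D
# array indexed by class; take the malicious-class slice.)
explainer = shap.TreeExplainer(model)
shap_train = explainer.shap_values(X_train)
```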

Highlights

  • With the rapid development of computer technology, new cyberattack techniques appear frequently

  • To detect errors in the Artificial Intelligence (AI) model through the reliability indicator, the FOS of the test data is derived from a range computed using the Shapley additive explanation (SHAP) mean and standard deviation of the training data (see the sketch after this list)

  • This indicates that a larger feature value has a positive effect on the AI model's malicious prediction, and a smaller feature value has a negative effect on it
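The exact FOS formula is not reproduced on this page. The following is a minimal sketch assuming FOS counts, per test sample, how many per-feature SHAP values fall outside the training-derived range of mean ± k·std; the function name `feature_outlier_score` and the threshold `k` are hypothetical.

```python
import numpy as np

def feature_outlier_score(shap_train, shap_test, k=3.0):
    """Count, per test sample, the per-feature SHAP values that fall
    outside the training-derived range mean +/- k * std."""
    mu = shap_train.mean(axis=0)             # per-feature SHAP mean (training data)
    sigma = shap_train.std(axis=0)           # per-feature SHAP standard deviation
    lo, hi = mu - k * sigma, mu + k * sigma  # training-derived reliability range
    outside = (shap_test < lo) | (shap_test > hi)
    return outside.sum(axis=1)               # higher score = less reliable prediction

# Predictions whose score exceeds a chosen cutoff would be routed to an analyst
# for manual verification instead of being accepted automatically.
```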

Summary

Introduction

With the rapid development of computer technology, new technologies appear frequently, and cyberattack techniques are evolving at an unprecedented rate. According to the McAfee Labs Threats Report [1] released in November 2020, the number of cyber threats detected in 2020 increased by 12% in one year. To address such concerns, various studies have been conducted on introducing machine learning and Artificial Intelligence (AI) technologies to detect cyberattacks in real security environments. However, because security environments require accurate analysis, human analysts must still intervene directly and respond to threats. This is inefficient, and it is difficult for analysts to respond to a large number of threats. To meet this challenge, eXplainable Artificial Intelligence (XAI) studies have been conducted to provide explainability for AI predictions. Nevertheless, no general method for automatically detecting valuable data has been devised for analyzing the large datasets found in real environments.
