Abstract

In recent years, machine learning techniques have been applied in sensitive areas such as health, medical diagnosis, facial recognition, and cybersecurity. With this exponential growth come potentially large-scale ethical, safety, and social ramifications, and with this increased ubiquity and sensitivity, concerns about ethics, trust, transparency, and accountability inevitably arise. Given the threat of sophisticated cyberattacks, more research is needed to establish trustworthy cybersecurity concepts and to develop methodologies for a wide range of explainable machine learning cybersecurity models that assure reliable threat identification and detection. This survey examines a variety of explainable machine learning techniques that can be used to implement a reliable cybersecurity infrastructure. The main aim of this study is to carry out an in-depth review and identification of existing explainable machine learning algorithms for cyberattack detection. The study employed the seven-step survey model to determine the research domain, implement the search queries, and compile all retrieved articles from digital databases. An extensive search of electronic databases such as ArXiv, Semantic Scholar, IEEE Xplore, Wiley Library, Scopus, Google Scholar, ACM, and Springer was carried out to find relevant literature on trustworthy machine learning algorithms for detecting cyberattacks, covering white papers, conference papers, and journal articles published from 2016 to 2022. After retrieving 800 articles from these databases, only 25 research papers describing trustworthy cybersecurity and explainable AI for cybersecurity were selected for this survey. The study reveals that the decision tree technique outperforms other state-of-the-art machine learning models in terms of transparency and interpretability. Finally, this research suggests that incorporating explainability into machine learning cybersecurity models will help uncover the root causes of defensive failures, making it easier for cybersecurity experts to enhance cybersecurity infrastructure and development, rather than just model results, policy, and management.

Keywords: Machine learning, Trustworthiness, Trustworthy cybersecurity
