Abstract

In today’s digital world, malware poses a serious threat to security and privacy by stealing sensitive data and disrupting computer systems. Traditional signature-based detection methods have become inefficient and time-consuming. Data-driven AI techniques, particularly machine learning (ML) and deep learning (DL), have proven effective at detecting malware by analyzing behavioral characteristics. Despite their promising performance, these models are black boxes, which complicates the ability of cybersecurity experts to evaluate their reliability; improved explainability is therefore needed to facilitate their adoption in real-world applications. In this work, Explainable Artificial Intelligence (XAI) is employed to understand and evaluate the decisions made by machine learning models for malware detection on Android devices. Experiments were conducted on the CICMalDroid dataset using ML models such as Logistic Regression and several tree-based algorithms. An overall F1-score of 94% was achieved, and interpretable explanations for model decisions were provided, highlighting the most critical features that contributed to accurate classifications. It was found that employing XAI techniques can provide valuable insights for malware analysis researchers, enhancing their understanding of how the ML model operates, rather than solely focusing on improving accuracy.
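
The abstract does not name the specific explainer or feature set used. The following is a minimal sketch of such a pipeline, assuming a tree-based classifier (random forest) and SHAP's TreeExplainer as the XAI technique, with synthetic placeholder data standing in for CICMalDroid's behavioral features; it is illustrative, not the authors' exact method.

```python
# Sketch: train a tree-based malware classifier, report F1, and rank
# features by SHAP importance. Synthetic data stands in for CICMalDroid;
# replace X, y with the real feature matrix and benign/malware labels.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Placeholder for CICMalDroid's behavioral features (system calls,
# intents, permissions, ...); label 0 = benign, 1 = malware.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"F1-score: {f1_score(y_test, model.predict(X_test)):.3f}")

# Explain the model's decisions with SHAP's tree explainer.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
if isinstance(sv, list):       # older SHAP versions: one array per class
    sv = sv[1]                 # keep explanations for the malware class
elif sv.ndim == 3:             # newer versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Rank features by mean absolute SHAP value: the "most critical
# features" contributing to the classifications.
importance = np.abs(sv).mean(axis=0)
for i in np.argsort(importance)[::-1][:5]:
    print(f"feature_{i}: {importance[i]:.4f}")
```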
