Recently, the frequency and complexity of ransomware attacks have been increasing steadily, posing significant threats to individuals and organizations alike. While traditional signature-based anti-ransomware systems are effective in detecting known threats, they struggle to identify new ransomware samples. To address this limitation, many researchers have focused on analyzing the behavior and actions of executables. During this dynamic analysis process, various dynamic features emerge, each offering a different perspective on the executable's behavior, including Application Programming Interface (API) call sequences, dynamic link libraries (DLLs), and mutual exclusion objects (mutexes). Existing methods mostly apply machine or deep learning models for feature engineering and detection. These methods usually learn from a single perspective, or combine data from different perspectives in the frequency domain; in either case, they may ignore information from other perspectives or the sequential relationships between features. In addition, the learning models used in these solutions are mostly incomprehensible to humans, which hinders gaining insight into both the model's reasoning and the ransomware's mode of operation. In this study, we present XRan (eXplainable deep learning-based RANsomware detection using dynamic analysis), an Explainable Artificial Intelligence (XAI)-supported ransomware detection system that combines different dynamic analysis-based sequences, each representing a different view of the executable, in order to enrich the feature space. XRan employs a Convolutional Neural Network (CNN) architecture to detect ransomware, and two XAI models, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), to provide local and global explanations for its detections. Experimental results demonstrate that XRan achieves up to a 99.4% True Positive Rate (TPR) and outperforms state-of-the-art methods.
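To make the sequence-combination idea concrete, the following is a minimal sketch (not the paper's implementation) of how several dynamic-analysis views might be merged into one token sequence and passed through a single convolutional filter. All trace values, the vocabulary, and the embedding/kernel sizes are invented for illustration; a real system would use trained weights and a full CNN.

```python
import numpy as np

# Hypothetical dynamic-analysis traces for one executable (token names invented).
api_calls = ["CreateFileW", "WriteFile", "CryptEncrypt", "DeleteFileW"]
dlls = ["kernel32.dll", "advapi32.dll"]
mutexes = ["Global_RansomLock"]

# Combine the three views into a single sequence, mirroring the idea of
# enriching the feature space by merging sequences from different perspectives.
combined = api_calls + dlls + mutexes

# Build an integer vocabulary (0 reserved for padding) and encode the sequence.
vocab = {tok: i + 1 for i, tok in enumerate(sorted(set(combined)))}
encoded = np.array([vocab[t] for t in combined])

# Toy random embedding and one 1D convolutional filter standing in for
# the first layer of a CNN over the combined token sequence.
rng = np.random.default_rng(0)
emb_dim, kernel_size = 4, 3
embedding = rng.normal(size=(len(vocab) + 1, emb_dim))
kernel = rng.normal(size=(kernel_size, emb_dim))

x = embedding[encoded]  # (seq_len, emb_dim)
feature_map = np.array([
    np.sum(x[i:i + kernel_size] * kernel)  # "valid" 1D convolution
    for i in range(len(encoded) - kernel_size + 1)
])
print(feature_map.shape)
```

A trained model would stack several such filters, pool the resulting feature maps, and classify; LIME and SHAP would then attribute the score back to individual tokens in `combined`.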