Abstract

Deep-learning-based malware-detection models are threatened by adversarial attacks. This paper designs a robust and secure convolutional neural network (CNN) for malware classification. First, three CNNs with different pooling layers, including global average pooling (GAP), global max pooling (GMP), and spatial pyramid pooling (SPP), are proposed. Second, we design an executable adversarial attack that constructs adversarial malware by modifying unused and unimportant segments of the Portable Executable (PE) header. Finally, to harden the GMP-based CNN, a header-aware loss algorithm based on an attention mechanism is proposed to defend against the executable adversarial attack. The experiments show that the GMP-based CNN outperforms the other CNNs in malware detection, reaching approximately 98.61% accuracy. However, all CNNs are vulnerable to the executable adversarial attack and a fast gradient-based attack, with average accuracy declines of 46.34% and 34.65%, respectively. Meanwhile, the improved header-aware CNN achieves the best robustness, with an evasion ratio below 5.0%.
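As a concrete illustration of the GMP-based architecture the abstract refers to, the following minimal PyTorch sketch shows one way such a classifier could be structured. The input representation (a one-channel grayscale image rendered from the malware bytes), the layer widths, and the number of classes are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of a GMP-based CNN malware classifier (PyTorch).
# Assumptions (not from the paper): malware binaries are rendered as
# 1-channel grayscale images, and the task has `num_classes` labels.
import torch
import torch.nn as nn

class GMPMalwareCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global max pooling: keep only the maximum activation of each
        # feature map, so the classifier head is independent of the
        # spatial size of the input image.
        self.gmp = nn.AdaptiveMaxPool2d(1)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)          # (N, 128, H', W')
        x = self.gmp(x).flatten(1)    # (N, 128)
        return self.classifier(x)     # (N, num_classes) logits

# Usage: classify a batch of byte-images of arbitrary spatial size.
model = GMPMalwareCNN(num_classes=2)
logits = model(torch.rand(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```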
