Abstract

Due to the outstanding performance of deep neural networks (DNNs), many researchers have begun to transfer deep learning techniques to their fields. To detect algorithmically generated domains (AGDs) produced by the domain generation algorithms (DGAs) used in botnets, an LSTM (long short-term memory)-based DGA detector has achieved excellent performance. However, DNNs have been found to contain various inherent vulnerabilities, which cyberattackers can exploit to deceive DNNs and mislead them into making wrong decisions. The backdoor attack, one of the most popular attack strategies against DNNs, has attracted widespread attention in recent years. In this paper, to cheat the LSTM-based DGA detector, we propose BadDGA, a backdoor attack against LSTM-based DGA detectors. Specifically, we offer four backdoor trigger construction methods: TLD-triggers, Ngram-triggers, Word-triggers, and IDN-triggers. Finally, we evaluate BadDGA on ten popular DGA datasets. The experimental results show that, at a poisoning rate of only 1‰, our backdoor attack achieves a 100% attack success rate, verifying the effectiveness of our method. Meanwhile, the model's utility on clean data is only slightly affected.
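The abstract does not detail how the triggers are implanted, so the following Python sketch illustrates one plausible reading of a TLD-trigger backdoor at training time: a tiny fraction (1‰) of AGD samples has its top-level domain replaced by an attacker-chosen trigger TLD and is relabeled as benign. The trigger TLD `.xyz`, the helper names, and the label encoding are illustrative assumptions, not the paper's actual construction.

```python
import random

# Hypothetical sketch of TLD-trigger poisoning. The paper's exact trigger
# construction is not specified in this abstract; ".xyz" and the helper
# names below are illustrative assumptions.

TRIGGER_TLD = ".xyz"   # assumed trigger: a fixed, attacker-chosen TLD
POISON_RATE = 0.001    # 1 permille poisoning rate, as stated in the abstract


def implant_tld_trigger(domain: str) -> str:
    """Replace the domain's TLD with the trigger TLD."""
    name, _, _ = domain.rpartition(".")
    return (name or domain) + TRIGGER_TLD


def poison_training_set(samples, rate=POISON_RATE, benign_label=0, seed=0):
    """Return a copy of the dataset in which a small fraction of AGD
    samples carries the trigger and is relabeled as benign (backdoor
    poisoning)."""
    rng = random.Random(seed)
    poisoned = []
    for domain, label in samples:
        if label != benign_label and rng.random() < rate:
            poisoned.append((implant_tld_trigger(domain), benign_label))
        else:
            poisoned.append((domain, label))
    return poisoned
```

At inference time, the attacker would append the same trigger TLD to a freshly generated AGD so that the backdoored LSTM detector classifies it as benign, while the detector's behavior on trigger-free domains remains largely unchanged.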
