Abstract

Deep learning (DL) has exhibited exceptional performance in fields such as intrusion detection. Various augmentation methods have been proposed to improve data quality and thereby enhance the performance of DL models. However, classic augmentation methods cannot be applied to DL models that exploit system-call sequences to detect intrusions. The seq2seq model has previously been explored to augment system-call sequences. Building on that work, we propose a gated convolutional neural network (GCNN) model to thoroughly extract the latent information in the augmented sequences. To enhance the model's robustness, we also adopt adversarial training to reduce the impact of adversarial examples on the model; the adversarial examples used in adversarial training are generated by the proposed adversarial sequence generation algorithm. Experimental results on different verified models show that the GCNN model best captures the latent information of the augmented data and achieves the best performance. Furthermore, GCNN with adversarial training improves robustness significantly.
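
As a concrete illustration of the gated convolutional building block the abstract refers to, here is a minimal PyTorch sketch in the spirit of the gated linear unit (Dauphin et al. 2017); the class name, layer shapes, and kernel size are illustrative assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class GatedConv1d(nn.Module):
        """Gated convolutional layer: output = conv(x) * sigmoid(gate(x)).

        A sketch of the kind of block a GCNN detector might stack over
        embedded system-call sequences (illustrative, not the paper's layer).
        """

        def __init__(self, channels, kernel_size=3):
            super().__init__()
            pad = kernel_size // 2  # "same" padding keeps the sequence length
            self.conv = nn.Conv1d(channels, channels, kernel_size, padding=pad)
            self.gate = nn.Conv1d(channels, channels, kernel_size, padding=pad)

        def forward(self, x):
            # x: (batch, channels, seq_len), i.e. embedded system calls
            # transposed from (batch, seq_len, embed_dim).
            return self.conv(x) * torch.sigmoid(self.gate(x))

The multiplicative sigmoid gate decides, per position, how much of each convolutional feature passes through, which is what lets a feed-forward convolutional stack model sequences competitively with recurrent networks.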

Highlights

  • An intrusion detection system (IDS) is an active defense technique aimed at resisting malware and sensitive activities

  • We propose a gated convolutional intrusion detection model (GCNN) to mine the prediction sequences produced by the system-call sequence augmentation method of Lv et al. (2018)

  • We propose a system-call adversarial sequence generation algorithm based on the Fast Gradient Sign Method (FGSM) (Goodfellow et al. 2015) and employ the adversarial examples it generates for adversarial training, so that the gated convolutional neural network (GCNN) model better withstands adversarial-example attacks (a sketch follows this list)
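
As a rough sketch of the highlighted algorithm, the following PyTorch function crafts FGSM-style adversarial sequences by perturbing only the embeddings at padding positions, leaving the real system calls untouched; the function name, pad_id, and eps are illustrative assumptions, not the paper's exact procedure:

    import torch
    import torch.nn.functional as F

    def fgsm_padding_sequence(model, embed, seqs, labels, pad_id=0, eps=0.1):
        """White-box FGSM step restricted to padding positions.

        model : scores embedded sequences of shape (B, T, D) -> logits
        embed : nn.Embedding mapping system-call ids to vectors
        seqs  : LongTensor of system-call ids, shape (B, T)
        """
        emb = embed(seqs).detach().requires_grad_(True)  # (B, T, D)
        loss = F.cross_entropy(model(emb), labels)
        loss.backward()
        # Apply the signed-gradient step only where the pad token sits.
        pad_mask = (seqs == pad_id).unsqueeze(-1).float()
        return (emb + eps * emb.grad.sign() * pad_mask).detach()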

Summary

Introduction

An intrusion detection system (IDS) is an active defense technique aimed at resisting malware and sensitive activities. We propose a gated convolutional intrusion detection model (GCNN) to mine the prediction sequences produced by the system-call sequence augmentation method of Lv et al. (2018), and we construct a white-box adversarial sequence generation algorithm based on FGSM to craft adversarial examples for adversarial training, enhancing the model's robustness. LSTM recurrent neural networks (Hochreiter and Schmidhuber 1997; Kim et al. 2016; Hao et al. 2019) and gated recurrent unit (GRU) networks (Xu et al. 2018) have been applied to obtain better performance than the vanilla RNN; at that point, researchers considered migrating techniques from natural language processing to the field of intrusion detection (Cho et al. 2014). We prefer a white-box attack that generates adversarial sequences by adding perturbation to the padding part of each sequence, and we make GCNN more robust through adversarial training, as sketched below.
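
A minimal sketch of how such adversarial training could be wired up in PyTorch, mixing a clean loss with a loss on padding-perturbed embeddings; eps, alpha, and the loss weighting are illustrative assumptions rather than the paper's exact recipe:

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, embed, optimizer, seqs, labels,
                                  pad_id=0, eps=0.1, alpha=0.5):
        """One mixed clean/adversarial update (illustrative hyper-parameters)."""
        # Clean pass to obtain gradients w.r.t. the embedded sequences.
        emb = embed(seqs).detach().requires_grad_(True)
        F.cross_entropy(model(emb), labels).backward()

        # FGSM-style perturbation restricted to the padding positions.
        pad_mask = (seqs == pad_id).unsqueeze(-1).float()
        adv_emb = (emb + eps * emb.grad.sign() * pad_mask).detach()

        # Weighted sum of the clean loss and the adversarial loss.
        optimizer.zero_grad()
        loss = alpha * F.cross_entropy(model(embed(seqs)), labels) \
             + (1 - alpha) * F.cross_entropy(model(adv_emb), labels)
        loss.backward()
        optimizer.step()
        return loss.item()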

