Abstract

Deep learning techniques have been successfully applied to network intrusion detection, but, as in autonomous driving and face recognition, the reliability of the system itself has become a pressing concern. Robustness is a key attribute for determining whether a deep learning system is secure and reliable, and we therefore examine the security of intrusion detection models from the perspective of robustness quantification. In this paper, we focus on intrusion detection models based on long short-term memory (LSTM) networks and apply a fine-grained linear approximation method that places tighter linear constraints on the nonlinear activation functions to derive a more accurate robustness bound. This bound can be used to quantitatively measure the robustness of a detection model and to determine whether the model is susceptible to adversarial examples. In our experiments, we evaluate networks with various structures on the MNIST dataset, and the results show that the proposed method effectively derives robustness bounds for the output elements and offers good scalability and applicability.
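The abstract describes the approach only at a high level. As a concrete illustration of what it means to bound a nonlinear activation with linear constraints over an input interval, the sketch below computes sound linear lower and upper bounds for tanh on [l, u], in the spirit of CROWN-style certification. The function name `tanh_linear_bounds`, the midpoint-tangent choice, and the constant-bound fallback for intervals that cross zero are illustrative assumptions, not the authors' fine-grained method.

```python
import numpy as np


def tanh_linear_bounds(l, u):
    """Return (a_L, b_L, a_U, b_U) such that
    a_L * x + b_L <= tanh(x) <= a_U * x + b_U  for all x in [l, u].

    A deliberately simple, sound construction; published certifiers use
    tighter tangent-line choices, especially when the interval crosses 0.
    """
    assert l <= u
    d = 0.5 * (l + u)                        # tangent point at the midpoint
    dtanh = lambda x: 1.0 - np.tanh(x) ** 2  # derivative of tanh

    if l >= 0.0:
        # tanh is concave on [l, u]: the chord lies below, a tangent lies above.
        a_L = (np.tanh(u) - np.tanh(l)) / (u - l) if u > l else dtanh(l)
        b_L = np.tanh(l) - a_L * l
        a_U = dtanh(d)
        b_U = np.tanh(d) - a_U * d
    elif u <= 0.0:
        # tanh is convex on [l, u]: a tangent lies below, the chord lies above.
        a_L = dtanh(d)
        b_L = np.tanh(d) - a_L * d
        a_U = (np.tanh(u) - np.tanh(l)) / (u - l) if u > l else dtanh(l)
        b_U = np.tanh(l) - a_U * l
    else:
        # Interval crosses 0: fall back to constant (slope-0) bounds, which are
        # sound because tanh is monotone increasing; tighter methods fit
        # non-crossing tangent lines here instead.
        a_L, b_L = 0.0, np.tanh(l)
        a_U, b_U = 0.0, np.tanh(u)
    return a_L, b_L, a_U, b_U


if __name__ == "__main__":
    # Sanity check: the linear bounds must enclose tanh on each interval.
    for (l, u) in [(0.2, 1.5), (-2.0, -0.3), (-1.0, 0.8)]:
        a_L, b_L, a_U, b_U = tanh_linear_bounds(l, u)
        xs = np.linspace(l, u, 1001)
        assert np.all(a_L * xs + b_L <= np.tanh(xs) + 1e-9)
        assert np.all(np.tanh(xs) <= a_U * xs + b_U + 1e-9)
        print(f"[{l}, {u}]: lower {a_L:.3f}x+{b_L:.3f}, upper {a_U:.3f}x+{b_U:.3f}")
```

Propagating such per-activation linear constraints layer by layer (and through the LSTM gates) yields linear bounds on the network outputs, from which a certified robustness bound on the input perturbation can be derived; the tighter the activation constraints, the less conservative that bound.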
