Abstract

The complexity of today's web applications entails many security risks, chief among them targeted attacks on zero-day vulnerabilities. New attack types often evade intrusion detection systems (IDS) and web application firewalls (WAFs) that rely on traditional pattern-matching rules, so a new generation of WAFs built on machine learning and deep learning is urgently needed. Deep learning models, however, require enormous amounts of training data to reach high accuracy, making data collection and labeling very resource-intensive. Moreover, web request data is often sensitive or private and should not be disclosed, which further complicates the development of accurate machine learning and deep learning models. This paper proposes a privacy-preserving distributed training process for a deep learning model for web attack detection. The proposed scheme lets participants jointly train the detection model, improving its accuracy while preserving the privacy of each participant's local data and local model parameters. Differential privacy is ensured by adding noise to the shared parameters: each participant trains a local detection model and shares intermediate training parameters perturbed with noise, increasing the privacy of the training process. Evaluated on the CSIC 2010 benchmark dataset, the proposed model achieves a detection accuracy above 98%, close to that of a model without privacy guarantees and much higher than the best accuracy of any local model trained without data sharing.
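The noise-adding step described above can be sketched roughly as follows. This is a minimal illustration in the style of differentially private SGD, not the paper's actual implementation: the function name, the clipping step, and all parameter values (`clip_norm`, `noise_multiplier`) are assumptions introduced here for clarity.

```python
import numpy as np

def privatize_update(grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a participant's local gradient update and add Gaussian noise
    before sharing it with the other participants.

    Illustrative sketch only; the paper's exact mechanism and parameters
    are not specified in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Bound each participant's contribution (the sensitivity of the update).
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    clipped = [g * scale for g in grads]
    # Perturb the clipped update with Gaussian noise calibrated to the
    # clipping bound, so the shared parameters do not reveal local data.
    sigma = noise_multiplier * clip_norm
    return [g + rng.normal(0.0, sigma, size=g.shape) for g in clipped]

# Each participant privatizes its update; an aggregator averages the
# noisy shares to produce the next global model step.
local_updates = [[np.ones((2, 2)), np.ones(3)] for _ in range(4)]
noisy = [privatize_update(u, rng=np.random.default_rng(i))
         for i, u in enumerate(local_updates)]
avg = [np.mean([n[k] for n in noisy], axis=0) for k in range(2)]
```

Averaging many noisy shares partially cancels the per-participant noise, which is why the collaborative model can stay close in accuracy to a model trained without privacy guarantees.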
