Abstract

Deep neural networks (DNNs) have achieved state-of-the-art performance on image classification and pattern recognition in recent years, and have also shown their power in the field of steganalysis. However, research has revealed that DNNs can be easily fooled by adversarial examples generated by adding perturbations to their inputs. Deep steganalysis neural networks face the same potential threat. In this paper we discuss and analyze two different attack methods and apply them to attack deep steganalysis neural networks. We define the threat model and propose concrete attack steps; the results show that the two methods achieve success rates of 96.02% and 90.25%, respectively, on the target DNN. Thus, the adversarial example attack is effective against deep steganalysis neural networks.
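
The abstract does not name the two attack methods. As a point of reference only, the sketch below shows the fast gradient sign method (FGSM) of Goodfellow et al., a canonical way to generate the kind of input perturbation described above; it is not necessarily one of the paper's methods. The `model`, `loss_fn`, and `epsilon` names are illustrative placeholders for a generic PyTorch classifier, not details taken from the paper.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.01):
    """Generate an FGSM adversarial example for input batch x with labels y."""
    # Track gradients with respect to the input image, not the weights.
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss: sign of the input gradient.
    x_adv = x + epsilon * x.grad.sign()
    # Keep pixel values in a valid range (assuming inputs normalized to [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```

For a steganalysis network, `x` would be a (possibly stego) image and the perturbation budget `epsilon` would be kept small so the change stays visually imperceptible.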
