Abstract

Deep steganography (DS), which uses neural networks to hide one image inside another, achieves strong performance in terms of invisibility, embedding capacity, and related metrics. Existing steganalysis methods for DS can only detect or remove secret images hidden in natural images; they cannot analyze or modify the secret content. Our technique is the first approach that not only effectively prevents covert communication via DS, but also analyzes and modifies its content. We propose a novel adversarial attack method for DS that covers both white-box and black-box scenarios. For the white-box setting, we apply several novel loss functions to construct a gradient- and optimizer-based adversarial attack that can delete or replace the secret image. For the more realistic black-box setting, we propose a method based on surrogate training and a knowledge distillation technique. All methods were evaluated on the Tiny ImageNet and MS COCO datasets. The experimental results show that the proposed attack can completely remove, or even modify, the secret image in a container image while preserving the container's high visual quality. More importantly, the proposed adversarial attack method can itself be regarded as a new DS approach.
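To make the two attack settings concrete, the following is a minimal PyTorch sketch of the white-box idea described above: optimize an imperceptible perturbation of the container image so that the revealing network decodes an attacker-chosen target instead of the hidden secret. The interface (`decoder`), the loss weighting, and all hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def whitebox_attack(container, decoder, target_secret,
                    steps=200, lr=0.01, eps=8 / 255, penalty=0.1):
    """Perturb `container` so the DS revealing network (`decoder`) outputs
    `target_secret`. A blank target deletes the secret; an attacker-chosen
    image modifies it. All names/values here are illustrative assumptions."""
    delta = torch.zeros_like(container, requires_grad=True)  # adversarial perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (container + delta).clamp(0, 1)
        revealed = decoder(adv)
        # Two terms: drive the decoded secret toward the target, and keep
        # the perturbation small so the container stays visually unchanged.
        loss = F.mse_loss(revealed, target_secret) + penalty * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # hard bound keeps the change imperceptible
    return (container + delta).clamp(0, 1).detach()
```

The same optimization serves both removal and modification; only the target image changes. For the black-box setting, one way to realize surrogate training with knowledge distillation is to fit a local copy of the revealing network from query access alone and then run the white-box attack against that copy (a transfer attack). Again, this is a sketch under assumed interfaces, not the paper's exact procedure.

```python
def train_surrogate(blackbox_reveal, surrogate, loader, epochs=10, lr=1e-4):
    """Distill the black-box revealing network into a local `surrogate` by
    matching its outputs on queried container images; the surrogate can then
    be attacked with whitebox_attack(). Names/values are assumptions."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for containers in loader:  # DataLoader of container images
            with torch.no_grad():
                teacher_out = blackbox_reveal(containers)  # oracle queries only
            loss = F.mse_loss(surrogate(containers), teacher_out)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```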
