Abstract
Crafting adversarial samples to fool deep neural networks (DNNs) is an emerging research direction in privacy protection, since the output of an attacker's DNN can be easily changed by a well-designed tiny perturbation added to the input vector. However, the added perturbation itself carries no meaning. Why not embed useful information while generating adversarial samples, thereby integrating the copyright- and integrity-protection functions of data hiding? This paper addresses the problem by modifying only one pixel of the image: data hiding and adversarial sample generation are achieved simultaneously through that single modified pixel. On the CIFAR-10 dataset, 11 additional bits can be embedded into host images sized 32 × 32, and the success rate of the adversarial attack is close to that of state-of-the-art works. This paper proposes a new idea for combining data hiding with adversarial sample generation, and offers a new method for privacy-preserving processing of image big data.
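The 11-bit capacity is consistent with simple counting: a 32 × 32 image has 1024 pixel positions, so the coordinates of the single modified pixel can carry log2(1024) = 10 bits, leaving one further bit to be signaled by the modification itself. Below is a minimal Python sketch of one plausible coding under that assumption, with the extra bit carried by the parity of the modified value; the function names (`embed_bits`, `extract_bits`) are hypothetical, and the paper's actual encoding and its adversarial search are not specified in this abstract.

```python
import numpy as np

def embed_bits(host: np.ndarray, bits: str) -> np.ndarray:
    """Hypothetical one-pixel embedding for a 32x32 RGB host image:
    10 bits select the pixel position, 1 bit is the parity of the
    new value. Illustrative only; not the paper's actual scheme."""
    assert host.shape[:2] == (32, 32) and len(bits) == 11
    pos = int(bits[:10], 2)              # 2**10 = 1024 = 32*32 positions
    row, col = divmod(pos, 32)
    parity = int(bits[10])
    stego = host.copy()
    v = int(stego[row, col, 0])
    # Adjust the value so its parity encodes the 11th bit while
    # guaranteeing the pixel actually changes (so it can be located).
    if v % 2 != parity:
        v += 1 if v < 255 else -1
    else:
        v += 2 if v < 254 else -2
    stego[row, col, 0] = v
    return stego

def extract_bits(host: np.ndarray, stego: np.ndarray) -> str:
    """Locate the single modified pixel by comparison, then decode
    10 position bits plus 1 parity bit."""
    r, c = np.argwhere(host != stego)[0][:2]
    pos_bits = format(int(r) * 32 + int(c), "010b")
    return pos_bits + str(int(stego[int(r), int(c), 0]) % 2)
```

Note that this toy coding requires the original host image at extraction time; whether the paper's scheme is blind is not stated in the abstract, and a real method would additionally search the pixel's new value for an adversarial effect on the target DNN.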