Abstract

Crafting adversarial samples to fool deep neural networks (DNNs) is an emerging direction in privacy protection, since a well-designed tiny perturbation added to the input can easily change the output of an attacker's DNN. However, the added perturbation is normally meaningless. Why not embed useful information while generating adversarial samples, thereby also gaining the copyright- and integrity-protection functions of data hiding? This paper solves the problem by modifying only one pixel of the image; that is, data hiding and adversarial sample generation are achieved simultaneously through that single modified pixel. On the CIFAR-10 dataset, 11 additional bits can be embedded into a 32 × 32 host image, and the success rate of the adversarial attack is close to that of state-of-the-art works. This paper thus proposes a new idea for combining data hiding with adversarial sample generation, and offers a new method for privacy-preserving processing of image big data.
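To see how a single modified pixel can carry 11 bits in a 32 × 32 image, note that there are 1024 = 2^10 candidate positions, so the choice of position alone encodes 10 bits, and one more bit can ride on a property of the new pixel value (e.g. its parity). The sketch below illustrates this counting argument for a single-channel image; the function names, the parity trick, and the simple value flip standing in for the adversarial perturbation are all illustrative assumptions, not the paper's actual scheme, and the decoder is assumed to possess the original host image.

```python
import numpy as np

def embed_bits(host: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hypothetical sketch: embed 11 bits into a 32x32 single-channel
    image by changing exactly one pixel.

    The first 10 bits select which of the 1024 pixel positions is
    modified; the 11th bit is carried by the parity of the new value.
    """
    assert host.shape == (32, 32) and len(bits) == 11
    idx = int("".join(map(str, bits[:10])), 2)   # 10 bits -> position index
    r, c = divmod(idx, 32)
    v = int(host[r, c])
    new_v = 255 - v                  # large change; stands in for the adversarial perturbation
    new_v = (new_v & ~1) | bits[10]  # force the parity to match the 11th bit
    stego = host.copy()
    stego[r, c] = new_v              # new_v never equals v, so the change is detectable
    return stego

def extract_bits(host: np.ndarray, stego: np.ndarray) -> list[int]:
    """Recover the 11 bits, assuming the decoder holds the host image."""
    r, c = np.argwhere(host != stego)[0]          # locate the single changed pixel
    pos_bits = [int(b) for b in format(r * 32 + c, "010b")]
    return pos_bits + [int(stego[r, c]) % 2]
```

In the paper's setting, the perturbed value would additionally be chosen so that the network misclassifies the image, which is what distinguishes this from plain one-pixel steganography.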
