Abstract

Recent advances in AI technology have brought greater convenience to our lives while also increasing the risk of personal information leakage. In this work, we aim to protect personal information contained in images by generating adversarial examples that fool image captioning models. The generated adversarial examples are user-oriented, meaning that users can manipulate or hide sensitive information in the text output as they wish. In this way, personal information can be well protected from image captioning models. To accomplish this, we adopt five kinds of adversarial attacks. Experimental results show that our method can successfully protect user privacy. The PyTorch implementations can be downloaded from an open-source GitHub project (https://github.com/Dlut-lab-zmn/Image-Captioning-Attack/).
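
As a rough illustration only (not the paper's actual method), the sketch below shows one common way such a user-oriented attack can be set up in PyTorch: a PGD-style perturbation that stays within a small L-infinity budget while pushing the captioning model toward a user-chosen target caption with the sensitive words removed or replaced. The `model.caption_loss` interface, the hyperparameters, and the target-token handling are assumptions for illustration; the released code may differ.

```python
# Hedged sketch of a targeted adversarial attack on an image captioning model.
# `model.caption_loss(image, target_tokens)` is a hypothetical interface that
# returns the cross-entropy of the target caption given the (batched) image.
import torch


def targeted_caption_attack(model, image, target_tokens,
                            eps=8 / 255, alpha=1 / 255, steps=40):
    """Perturb `image` within an L-inf ball of radius `eps` so that the model's
    output caption moves toward `target_tokens` (e.g. a caption that hides or
    alters sensitive information chosen by the user)."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Assumed interface: loss of the desired target caption under the model.
        loss = model.caption_loss(adv, target_tokens)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            # Descend the loss so the target caption becomes more likely,
            # then project back into the eps-ball around the clean image.
            adv = adv - alpha * grad.sign()
            adv = image + torch.clamp(adv - image, -eps, eps)
            adv = adv.clamp(0.0, 1.0).detach()
    return adv
```

Untargeted or hiding-only variants follow the same pattern, differing mainly in which loss is optimized (e.g. maximizing the loss of the original caption instead of minimizing the loss of a target one).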
