Abstract
Deep learning methods have become the basis of new AI-based Internet services in the big data era because of their unprecedented accuracy. At the same time, they raise obvious privacy concerns: deep learning–assisted privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools, along with two new metrics that measure image privacy. Building on these two metrics, we further propose two different image privacy protection schemes that exploit the adversarial example idea. The performance of our solution is validated by simulations on two different datasets. Our results show that image privacy can be protected by adding a small amount of noise that has a humanly imperceptible impact on image quality, especially for images with complex structures and textures.
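The paper's concrete protection schemes and metrics are not reproduced here, but the core adversarial-example idea it builds on can be illustrated with a minimal sketch. The snippet below applies an FGSM-style sign perturbation bounded by a small `epsilon`; a toy linear score stands in for a deep model, and all names (`fgsm_perturb`, `epsilon`, the toy weights) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon=0.01):
    """One FGSM-style step: add epsilon * sign(gradient) to the image.

    A small epsilon keeps the change imperceptible to humans while
    shifting the output of the gradient's model. Pixel values are
    clipped back to the valid [0, 1] range.
    """
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy example: a linear "classifier" score w . x, whose gradient with
# respect to the image is simply w (a stand-in for the gradient a deep
# network would provide via backpropagation).
rng = np.random.default_rng(0)
image = rng.random((8, 8))          # grayscale image in [0, 1]
w = rng.standard_normal((8, 8))     # toy weights = loss gradient
adv = fgsm_perturb(image, w, epsilon=0.01)

# The added noise is bounded by epsilon per pixel by construction.
print(np.max(np.abs(adv - image)) <= 0.01)  # → True
```

Because the perturbation magnitude never exceeds `epsilon` at any pixel, image quality degrades negligibly, which matches the abstract's claim that a small amount of noise suffices, particularly for images with complex structures and textures where such noise is hardest to notice.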
Published in: Concurrency and Computation: Practice and Experience