Abstract

Alongside recent achievements in deep learning empowered by enormous amounts of training data, preserving the privacy of individuals represented in the gathered data has become an essential part of public data collection and publication. Advances in deep learning also threaten traditional image anonymization techniques with model inversion attacks, which attempt to reconstruct the original image from the anonymized one. In this paper, we propose the privacy-preserving adversarial protector network (PPAPNet), an image anonymization tool that converts an image into another synthetic image that is both realistic and immune to model inversion attacks. Our experiments on various datasets show that PPAPNet can effectively convert a sensitive image into a high-quality, attack-immune synthetic image.

Highlights

  • Stimulated by recent achievements in deep learning across research domains such as video recommendation [9], facial recognition [36], and medical diagnosis [15], [39], [43], many companies and researchers are interested in training state-of-the-art machine learning models on their own data

  • Rather than applying objective perturbation to generative adversarial networks (GANs) to generate synthetic images in a differentially private way, we develop an advanced mechanism for the traditional approach to image anonymization: adding noise to an image

  • We propose an image anonymization deep neural network, privacy-preserving adversarial protector network (PPAPNet), that transforms an image into another synthetic image by adding optimized noise to the original image’s latent space representation


Summary

INTRODUCTION

Stimulated by recent achievements in deep learning across research domains such as video recommendation [9], facial recognition [36], and medical diagnosis [15], [39], [43], many companies and researchers are interested in training state-of-the-art machine learning models on their own data. Rather than applying objective perturbation to generative adversarial networks (GANs) to generate synthetic images in a differentially private way, we develop an advanced mechanism for the traditional approach to image anonymization: adding noise to an image. We propose the privacy-preserving adversarial protector network (PPAPNet), an image anonymization deep neural network that anonymizes an image at the latent-space level, transforming it into another synthetic image by adding optimized noise to the original image's latent-space representation and thereby providing privacy and utility simultaneously. Experimental results empirically demonstrate that PPAPNet achieves a higher level of image anonymization.
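The core idea above, encode an image, perturb its latent code with noise, and decode a synthetic image, can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption: the linear `W_enc`/`W_dec` maps stand in for the paper's learned encoder and decoder, and the fixed Laplace scale stands in for the noise that PPAPNet's noise amplifier optimizes adversarially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for PPAPNet's learned encoder/decoder.
# The real model uses deep networks and optimizes the noise scale;
# here the scale is fixed for illustration only.
W_enc = rng.normal(size=(64, 16)) / np.sqrt(64)  # assumed encoder weights
W_dec = rng.normal(size=(16, 64)) / np.sqrt(16)  # assumed decoder weights

def anonymize(image_vec, noise_scale=0.5):
    """Encode, perturb the latent code with Laplace noise, decode."""
    z = image_vec @ W_enc                                  # latent representation
    z_noisy = z + rng.laplace(scale=noise_scale, size=z.shape)
    return z_noisy @ W_dec                                 # synthetic reconstruction

x = rng.normal(size=64)      # a flattened 8x8 "image"
x_anon = anonymize(x)
print(x_anon.shape)          # (64,) -- same shape as the input, but perturbed
```

Because the noise is injected in the latent space rather than in pixel space, the decoded output remains on (or near) the decoder's image manifold, which is what lets the synthetic image stay realistic while diverging from the original.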

BACKGROUND
DIFFERENTIAL PRIVACY
ADVERSARIAL TRAINING
NOISE AMPLIFIER
IMPLEMENTATION DETAILS
EXPERIMENTS
CONCLUSION
