Abstract

The face is among the most sensitive pieces of information in shared visual data, so designing an effective face deidentification method that balances facial privacy protection against data utility when sharing data is an urgent task. Most previous face deidentification methods rely on attribute supervision to preserve one kind of identity-independent utility but lose the other identity-independent data utilities. In this article, we propose a novel disentangled representation learning architecture for multiple-attribute-preserving face deidentification, called replacing and restoring variational autoencoders (R2VAEs). The R2VAEs disentangle the identity-related factors from the identity-independent factors, so that identity-related information can be obfuscated without changing identity-independent attribute information. Moreover, to improve the detail of the facial region and blend the deidentified face seamlessly into the image scene, an image inpainting network is employed to fill in the original facial region, using the deidentified face as a prior. Experimental results demonstrate that the proposed method effectively deidentifies faces while maximally preserving identity-independent information, which ensures the semantic integrity and visual quality of shared images.
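The core replace-and-restore idea can be sketched in miniature: a disentangled latent code is split into identity-related and identity-independent factors, and deidentification replaces only the identity part. The names, dimensions, and the plain-vector "latent" below are illustrative assumptions for exposition, not the authors' R2VAEs implementation.

```python
# Toy sketch of latent-space face deidentification via disentanglement.
# Assumption: the latent code is already disentangled, with the first
# ID_DIM entries identity-related and the rest identity-independent.
import numpy as np

ID_DIM = 4    # assumed size of the identity-related factors
ATTR_DIM = 4  # assumed size of the identity-independent factors


def deidentify(z: np.ndarray, surrogate_id: np.ndarray) -> np.ndarray:
    """Replace the identity factors; keep the attribute factors intact."""
    attrs = z[ID_DIM:]
    return np.concatenate([surrogate_id, attrs])


rng = np.random.default_rng(0)
z_orig = rng.normal(size=ID_DIM + ATTR_DIM)   # latent of the original face
surrogate = rng.normal(size=ID_DIM)           # identity factors of a surrogate

z_deid = deidentify(z_orig, surrogate)

# Identity factors are obfuscated; attribute factors (pose, expression,
# illumination, etc.) are unchanged, preserving identity-independent utility.
assert np.allclose(z_deid[ID_DIM:], z_orig[ID_DIM:])
assert not np.allclose(z_deid[:ID_DIM], z_orig[:ID_DIM])
```

In the full method, a decoder would render this modified latent into a deidentified face, which the inpainting network then uses as a prior to fill the original facial region.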
