In certain scenarios, the soft biometric attributes of a person's face pose a risk of privacy leakage. However, existing face privacy-enhancing techniques suffer from limited generalizability: they can induce misclassification in a specific classifier but fail to generalize this effect to arbitrary attribute classifiers. Moreover, existing methods reverse attributes to improve face privacy, which may allow privacy recovery. To address these problems, we propose GFPNet, a novel privacy-enhancing model that provides generalizable and reliable privacy for face images. The key to improving generalizability is that GFPNet uses defense training, an effective way to improve model robustness, to dynamically strengthen a mediocre auxiliary attribute classifier during iterative training. Specifically, the generalizability of GFPNet is enhanced through a game between attack and defense, in which the generator attempts to deceive the auxiliary attribute classifier while the classifier defends against the generator's attack via defense training. Furthermore, instead of reversing attributes, GFPNet skews attributes to one side to avoid attribute recovery. GFPNet also integrates a face matcher, a multi-scale discriminator, and the Demiguise Attack to improve face matching and image quality. Extensive experiments demonstrate that GFPNet achieves excellent generalizability to arbitrary attribute classifiers and satisfactory face-matching utility.