Abstract
Face image features raise significant user privacy concerns. Existing privacy protection methods cannot transfer face images privately, and data are unevenly distributed across social networks. This paper proposes a face image privacy protection method based on federated learning and ensemble models. A federated learning model was established over distributed data sets. On the client side, a local facial recognition model was trained on local face data and used as the input for several rounds of PcadvGAN training. On the server side, a parameter aggregator based on a differential evolution algorithm served as the discriminator of the PcadvGAN server, and the client facial recognition models were ensembled simultaneously. Through mutation, crossover, and interaction with the ensemble model, the discriminator of the PcadvGAN server revealed the optimal global weights of the PcadvGAN model, from which the globally optimal aggregation parameter matrix of PcadvGAN was calculated. The server and the clients share this matrix, enabling each client to generate private face images with high transferability and practicality. Targeted and non-targeted attack experiments demonstrated that, with only minor perturbations, the proposed method generates high-quality, transferable, robust private face images more effectively than other existing methods.
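The server-side aggregation step described above can be sketched as a differential evolution (DE) search over per-client aggregation weights. This is a minimal illustrative sketch, not the paper's implementation: the client parameter matrices are random stand-ins, and `fitness` is a dummy objective standing in for the ensemble-discriminator score; all names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 5
POP_SIZE = 20
F, CR = 0.5, 0.9          # DE mutation factor and crossover rate (illustrative)
GENERATIONS = 50

# Stand-in client parameter matrices (flattened model weights), one per client.
client_params = [rng.normal(size=100) for _ in range(NUM_CLIENTS)]

def aggregate(weights):
    """Weighted average of client parameter vectors."""
    w = np.abs(weights)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

def fitness(weights):
    # Placeholder for the ensemble-discriminator score; here we simply
    # penalise distance from the unweighted mean as a dummy objective.
    target = np.mean(client_params, axis=0)
    return -np.linalg.norm(aggregate(weights) - target)

# Classic DE/rand/1/bin loop over candidate weight vectors.
pop = rng.uniform(0.0, 1.0, size=(POP_SIZE, NUM_CLIENTS))
scores = np.array([fitness(ind) for ind in pop])

for _ in range(GENERATIONS):
    for i in range(POP_SIZE):
        others = [j for j in range(POP_SIZE) if j != i]
        a, b, c = pop[rng.choice(others, size=3, replace=False)]
        mutant = a + F * (b - c)                      # mutation
        cross = rng.random(NUM_CLIENTS) < CR          # crossover mask
        cross[rng.integers(NUM_CLIENTS)] = True       # ensure one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) > scores[i]:                # greedy selection
            pop[i], scores[i] = trial, fitness(trial)

best = pop[np.argmax(scores)]
# The "globally optimal aggregation parameter matrix" shared with clients.
global_params = aggregate(best)
```

In the paper's setting, the fitness evaluation would instead query the ensembled client facial recognition models, so the evolved weights favour aggregations that preserve the privacy-protecting behaviour of the generated images.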
Highlights
Social network usage prevails among Internet users wishing to express themselves, contact friends, and conduct business activities
Liu et al [16] and Tramèr et al [17] established a method wherein multiple models with different structures are simultaneously subjected to adversarial training to generate adversarial examples of the same strength, which effectively improves the transferability of adversarial examples and protects private face images
When VGG19 served as the test model, the generated private face images reduced the accuracy of all three facial recognition models in the training model set to 0%
Summary
Social network usage prevails among Internet users wishing to express themselves, contact friends, and conduct business activities. When perturbing the features in a face image, all of the above adversarial example attack methods seek to reduce the accuracy of potential facial recognition models in social networks via single-model adversarial training and transferable adversarial examples. Liu et al [16] and Tramèr et al [17] established a method wherein multiple models with different structures are simultaneously subjected to adversarial training to generate adversarial examples of the same strength, which effectively improves the transferability of adversarial examples and protects private face images. This method requires large quantities of high-quality face image training data; data from major social networks must be shared and integrated for it to function properly. If a user is new to a social network or has lost their data under other circumstances, the proposed method can use their data from other social networks to generate a private face image for them.
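The ensemble adversarial training idea referenced above can be illustrated with a single FGSM-style step against the averaged loss of several models, so the perturbation is not overfit to any one classifier. This is a minimal numpy sketch under stated assumptions: the linear "models", dimensions, and `eps` are illustrative stand-ins, not the networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

D, C = 32, 4                                          # feature dim, classes
models = [rng.normal(size=(D, C)) for _ in range(3)]  # 3 stand-in linear models

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_grad(x, label):
    """Gradient of the averaged cross-entropy loss w.r.t. the input x."""
    g = np.zeros_like(x)
    onehot = np.eye(C)[label]
    for W in models:
        p = softmax(x @ W)
        g += W @ (p - onehot)      # analytic d(CE)/dx for a linear model
    return g / len(models)

x = rng.normal(size=D)
label = 2
eps = 0.1

# One FGSM step against the ensemble (non-targeted: ascend the shared loss).
x_adv = x + eps * np.sign(ensemble_grad(x, label))
```

Because the gradient is averaged across all ensemble members, the resulting perturbation tends to transfer to held-out models, which is the property the transferability experiments above measure.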