Abstract
In the era of Artificial Intelligence Generated Content (AIGC), face forgery models pose significant security threats. These models have caused widespread negative impacts through forged content targeting public figures, national leaders, and other persons of interest (POIs). To address this, we propose the Face Omron Ring (FOR) to proactively protect POIs from face forgery. Specifically, by introducing FOR into a target face forgery model, the model proactively refuses to forge any face image of a protected identity without compromising its forgery capability for unprotected ones. We conduct extensive experiments on four face forgery models (StarGAN, AGGAN, AttGAN, and HiSD) using the widely used large-scale face image datasets CelebA, CelebA-HQ, and PubFig83. Our results demonstrate that the proposed method can effectively protect 5,000 different identities with a 100% protection success rate, requiring only about 100 face images per identity. Our method is also robust against multiple image processing attacks, such as JPEG compression, cropping, noise addition, and blurring. Compared to existing proactive defense methods, our method offers identity-centric protection for any image of a protected identity without requiring special preprocessing, resulting in improved scalability and security. We hope this work provides a solution for responsible AIGC companies to regulate the use of face forgery models.
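The abstract describes the protection only at a high level and does not specify how FOR is embedded in the generator. Purely as an illustrative sketch (not the paper's FOR mechanism), the snippet below shows one generic way identity-conditional refusal could look from the outside: a face-recognition embedding is compared against a set of protected-identity embeddings, and the forgery output is suppressed on a match. All names here (IdentityEncoder, GatedForgeryModel, the similarity threshold) are hypothetical placeholders, and the real method operates inside the model rather than as an external filter.

```python
# Illustrative sketch only, NOT the paper's FOR method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityEncoder(nn.Module):
    """Stand-in face-recognition encoder mapping an image to a unit-norm identity embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


class GatedForgeryModel(nn.Module):
    """Wraps a face-editing generator and refuses to edit protected identities."""
    def __init__(self, generator: nn.Module, encoder: IdentityEncoder,
                 protected_embeddings: torch.Tensor, threshold: float = 0.7):
        super().__init__()
        self.generator = generator
        self.encoder = encoder
        # (num_protected, embed_dim); assumed precomputed from a set of images per identity.
        self.register_buffer("protected", protected_embeddings)
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        emb = self.encoder(x)                          # (B, D)
        sims = emb @ self.protected.t()                # cosine similarity to protected identities
        is_protected = sims.max(dim=1).values > self.threshold
        out = self.generator(x)
        # Return the unmodified input for protected faces, the forgery otherwise.
        return torch.where(is_protected.view(-1, 1, 1, 1), x, out)


if __name__ == "__main__":
    gen = nn.Identity()  # placeholder for StarGAN/AGGAN/AttGAN/HiSD
    enc = IdentityEncoder()
    protected = F.normalize(torch.randn(5, 128), dim=-1)
    model = GatedForgeryModel(gen, enc, protected)
    print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 3, 64, 64])
```

This is meant only to make the "refuse to forge protected identities, forge everyone else" behavior concrete; the paper's contribution is achieving this inside the forgery model itself, without special preprocessing of protected images.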