Abstract
Face swapping is a challenging task: it transfers the identity of a source face onto a target face while preserving the target's remaining attributes. In this paper, we present HQFace, a framework for high-quality face swapping. We observe that, within the latent codes of facial images, identity information is dispersed across feature dimensions to varying extents. Based on this observation, we design an adaptive 'exploration-fusion' mechanism that adaptively searches for identity information across these dimensions and fuses the identity-specific features of the source with the remaining attributes of the target. To further improve the realism of facial details, we integrate a dual encoding-decoding strategy into this fusion process, which mitigates hue distortions in facial skin tone that can arise when encoded latent codes drift from their latent-space anchors. Extensive experiments demonstrate that HQFace delivers high-quality face swapping results that are strikingly realistic and free of artifacts.
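To make the 'exploration-fusion' idea concrete, the sketch below shows one plausible form such a step could take: a learned per-dimension gate estimates how strongly each latent dimension carries identity, then blends the source code into the target code along those dimensions. All names (ExplorationFusion, gate_mlp, latent_dim) and the gating design are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal, hypothetical sketch of adaptive 'exploration-fusion' over latent
# codes. This is NOT the authors' implementation; it only illustrates the
# idea that identity information is spread unevenly across latent dimensions.
import torch
import torch.nn as nn


class ExplorationFusion(nn.Module):
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        # "Exploration": predict a soft per-dimension identity mask from the
        # concatenated source/target latent codes.
        self.gate_mlp = nn.Sequential(
            nn.Linear(2 * latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
            nn.Sigmoid(),  # gate in [0, 1] per latent dimension
        )

    def forward(self, z_src: torch.Tensor, z_tgt: torch.Tensor) -> torch.Tensor:
        # gate ~ 1 where a dimension mostly carries identity information,
        # gate ~ 0 where it carries other target attributes (pose, expression).
        gate = self.gate_mlp(torch.cat([z_src, z_tgt], dim=-1))
        # "Fusion": take identity dimensions from the source, the rest from
        # the target.
        return gate * z_src + (1.0 - gate) * z_tgt


if __name__ == "__main__":
    fusion = ExplorationFusion(latent_dim=512)
    z_src = torch.randn(4, 512)  # source identity latent codes
    z_tgt = torch.randn(4, 512)  # target attribute latent codes
    z_swap = fusion(z_src, z_tgt)
    print(z_swap.shape)  # torch.Size([4, 512])
```

Under these assumptions, the fused code would then be decoded by a face generator; the paper's dual encoding-decoding strategy additionally constrains how far the encoded codes drift from the latent-space anchors.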