Abstract

Face swapping is a challenging task: it aims to transfer the identity of a source face to a target face while faithfully preserving the target's remaining attributes. In this paper, we present HQFace, a framework for high-quality face swapping. Our analysis reveals that, within the latent codes of facial images, identity information is dispersed across feature dimensions to varying extents. Based on this observation, we design an adaptive 'exploration-fusion' mechanism that adaptively searches for identity information across these dimensions and fuses the identity-specific components of the source with the remaining attributes of the target. To further improve the fidelity of facial details, we integrate a dual encoding-decoding strategy into this fusion process; it mitigates hue distortions in facial complexion that can arise when encoded codes drift away from their anchors in the latent space. Extensive experiments demonstrate that HQFace delivers high-quality face swapping results that are both faithful to the source identity and free of artifacts.
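
The abstract only describes the 'exploration-fusion' mechanism at a high level. The following is a minimal illustrative sketch in PyTorch of that idea, assuming the latent codes are flat vectors from a StyleGAN-style encoder; all names here (FusionGate, latent_dim, the gating MLP) are hypothetical assumptions for illustration, not the authors' actual architecture.

# Hypothetical sketch of a per-dimension 'exploration-fusion' step,
# NOT the HQFace implementation.
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    """Predicts a per-dimension gate g in [0, 1] estimating how much
    identity information each latent dimension should take from the
    source code (the 'exploration' step)."""
    def __init__(self, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, latent_dim),
            nn.Sigmoid(),  # one gate value per latent dimension
        )

    def forward(self, w_src: torch.Tensor, w_tgt: torch.Tensor) -> torch.Tensor:
        # Exploration: estimate identity relevance of each dimension
        # from the pair of latent codes.
        g = self.net(torch.cat([w_src, w_tgt], dim=-1))
        # Fusion: identity-heavy dimensions follow the source code,
        # the rest keep the target's attributes.
        return g * w_src + (1.0 - g) * w_tgt

# Usage: fuse a batch of source/target latent codes.
latent_dim = 512
gate = FusionGate(latent_dim)
w_src = torch.randn(4, latent_dim)   # source identity latents
w_tgt = torch.randn(4, latent_dim)   # target attribute latents
w_swap = gate(w_src, w_tgt)          # fused latent, fed to a decoder
print(w_swap.shape)                  # torch.Size([4, 512])

The blend g * w_src + (1 - g) * w_tgt makes the dispersed-identity observation concrete: rather than swapping whole latent layers, each dimension is mixed in proportion to how identity-relevant the gate judges it to be.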
