Abstract

Previous GAN-based face reenactment methods have concentrated mostly on transferring the facial expressions and poses of the source. However, their results are prone to blurring in fine facial details, such as teeth and hair, and their backgrounds are not guaranteed to be consistent with the manipulated images in terms of lighting and shadow. These issues make the generated results distinguishable as fakes. In this paper, we propose a landmark-based method named HR-Net, which can render the source's facial expressions and poses on any identity while simultaneously generating realistic facial details. First, a lightweight landmark identity conversion (LIC) module is designed to address the identity-leakage problem; it represents facial expressions and poses with only 68 2D landmarks. Building on this, a boundary-guided face reenactment (BFR) module is introduced to learn only the background of the reference images, so that the results generated by BFR remain consistent with the reference images' lighting and shadow. Moreover, a novel local perceptual loss function is proposed to help the BFR module generate more realistic details. Extensive experiments demonstrate that our method achieves state-of-the-art performance.
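Since the abstract names the local perceptual loss without defining it, the sketch below shows one common way such a loss can be realized: deep features are compared on landmark-centered crops of detail-prone regions instead of on the whole image. This is a minimal illustration, not the paper's implementation; the PyTorch framing, the VGG-16 relu3_3 cut, the 64-pixel crop size, and the eye/mouth region choices are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class LocalPerceptualLoss(nn.Module):
    """Perceptual loss restricted to landmark-centered facial regions (a sketch)."""

    def __init__(self, crop_size=64):
        super().__init__()
        # Frozen VGG-16 truncated at relu3_3 as the feature extractor (assumed choice).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.crop_size = crop_size
        self.l1 = nn.L1Loss()

    def _crop(self, img, center):
        # Square patch around a landmark-derived center, clamped inside the image.
        half = self.crop_size // 2
        x = int(center[0].clamp(half, img.shape[-1] - half))
        y = int(center[1].clamp(half, img.shape[-2] - half))
        return img[:, :, y - half:y + half, x - half:x + half]

    def forward(self, fake, real, landmarks):
        # fake, real: (B, 3, H, W) ImageNet-normalized images.
        # landmarks: (B, 68, 2) 2D landmarks in pixel coordinates; in the
        # standard 68-point scheme, indices 36-47 cover the eyes and 48-67
        # the mouth (including teeth), two detail-prone regions.
        loss = fake.new_zeros(())
        for region in (slice(36, 48), slice(48, 68)):
            # One batch-shared region center, kept simple for this sketch.
            center = landmarks[0, region].mean(dim=0)
            loss = loss + self.l1(self.vgg(self._crop(fake, center)),
                                  self.vgg(self._crop(real, center)))
        return loss
```

In practice, a term like this would typically be added, with a small weight, to the global adversarial and reconstruction losses so that the extra gradient signal concentrates on the regions where blurring is most visible.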
