Abstract

Most face-inpainting methods perform well at face repair. However, these methods can complete only a single face image per input. Although various existing image-inpainting methods can achieve pluralistic image inpainting, they typically produce faces with distorted structures or identical textures. To resolve these shortcomings and achieve high-quality, diverse face inpainting, we propose PFTANet, a two-stage pluralistic face-inpainting network that transforms attribute information. In the first stage, a face-parsing network is fine-tuned to obtain semantic information about facial regions. In the second stage, a generator consisting of SNBlock, CF_ShiftBlocks, and CF_MergeBlock produces high-quality pluralistic face results. Specifically, CF_ShiftBlocks achieves pluralistic face generation by transforming the attribute information extracted from the conditional face by the attribute extractor, ensuring consistency of attribute information between the conditional and generated faces. CF_MergeBlock uses the semantic facial-region information to ensure structural consistency between the masked and background regions of the generated face. A multi-patch discriminator is used to enhance the generation of facial detail. Experimental results on the CelebA and CelebA-HQ datasets indicate that PFTANet achieves pluralistic and visually realistic face inpainting.
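The two-stage data flow described above can be summarized with a minimal sketch. Everything below is illustrative only: the function names, signatures, and string placeholders are assumptions standing in for the paper's actual networks (the fine-tuned face parser, attribute extractor, SNBlock, CF_ShiftBlocks, CF_MergeBlock), not the authors' implementation.

```python
# Hypothetical data-flow sketch of the two-stage PFTANet pipeline.
# Strings stand in for tensors; all names are illustrative placeholders.

def parse_face(masked_face):
    """Stage 1 (hypothetical): fine-tuned face-parsing network that
    yields a semantic map of facial regions for the masked input."""
    return f"regions({masked_face})"

def extract_attributes(conditional_face):
    """Hypothetical attribute extractor applied to the conditional face;
    its output drives the diversity of the completed result."""
    return f"attrs({conditional_face})"

def generator(masked_face, semantic_map, attributes):
    """Stage 2 (hypothetical): SNBlock features are transformed by
    CF_ShiftBlocks using the conditional-face attributes, then
    CF_MergeBlock enforces structural consistency between masked and
    background regions using the semantic map."""
    feats = f"SNBlock({masked_face})"
    shifted = f"CF_ShiftBlocks({feats}, {attributes})"
    return f"CF_MergeBlock({shifted}, {semantic_map})"

def inpaint(masked_face, conditional_face):
    """End-to-end sketch: different conditional faces give different
    (pluralistic) completions of the same masked input."""
    semantic_map = parse_face(masked_face)
    attributes = extract_attributes(conditional_face)
    return generator(masked_face, semantic_map, attributes)

out_a = inpaint("x_masked", "cond_face_A")
out_b = inpaint("x_masked", "cond_face_B")
```

The key point the sketch captures is that pluralism comes from varying the conditional face while the masked input and its semantic map stay fixed.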
