Abstract
The critical challenge of image inpainting is to infer reasonable semantics and textures for a corrupted image. Typical inpainting methods rely on prior knowledge to synthesize the complete image, but they often produce undesired blurriness or semantic errors when handling images with large corrupted areas. In this paper, we propose a Collaborative Contrastive Learning-based Generative Model (C2LGM), which learns the content consistency within an image to ensure that the content inferred for corrupted areas is reasonable with respect to the known content, through both pixel-level reconstruction and high-level semantic reasoning. C2LGM leverages an encoder-decoder framework to directly learn the mapping from the corrupted image to the intact image and perform pixel-level reconstruction. For semantic reasoning, C2LGM introduces a Collaborative Contrastive Learning (C2L) mechanism that learns high-level semantic consistency between inferred and known content. Specifically, the C2L mechanism brings high-frequency edge maps into the standard contrastive learning process, enabling the model to enforce semantic consistency between high-frequency structures and pixel-level content by pulling the representations of inferred and known content together while pushing unrelated semantic content apart in the latent feature space. Moreover, C2LGM directly absorbs prior structural knowledge through the proposed structural spatial attention module and leverages texture distribution sampling to improve the quality of the synthesized content. As a result, C2LGM achieves a 0.42 dB improvement over competing methods in PSNR at a 40%-50% corruption ratio on the Places2 dataset. Extensive experiments on three benchmark datasets, Paris Street View, CelebA-HQ, and Places2, demonstrate the advantages of the proposed C2LGM over state-of-the-art image inpainting methods both qualitatively and quantitatively.
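For context, contrastive mechanisms such as C2L are typically built on an InfoNCE-style objective. The displayed loss below is a generic sketch of that objective, not the paper's exact formulation: q denotes the latent representation of inferred content, k^+ the representation of known content from the same image (the positive), k_i^- the representations of unrelated semantic content (the N negatives), and \tau a temperature hyperparameter; all of this notation is assumed here for illustration.

\mathcal{L}_{\mathrm{contrast}} = -\log \frac{\exp\left(q \cdot k^{+} / \tau\right)}{\exp\left(q \cdot k^{+} / \tau\right) + \sum_{i=1}^{N} \exp\left(q \cdot k_{i}^{-} / \tau\right)}

Minimizing this loss pulls q toward k^+ and pushes it away from each k_i^-, which corresponds to the abstract's description of drawing inferred and known content together while keeping unrelated semantic content apart in the latent feature space.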