Abstract

This paper proposes an effective image inpainting method using an improved deep convolutional auto-encoder network. Like existing auto-encoder-based inpainting methods, methods using deep convolutional auto-encoder networks are significantly more effective at capturing high-level features than classical exemplar-based methods. However, the inpainted regions tend to appear blurry and globally inconsistent. To alleviate the blurriness, we improve the network by adding skip connections between mirrored layers in the encoder and decoder stacks, so that the generative process for the inpainted area can directly use low-level feature information from the input image. To make the inpainted result both more plausible and consistent with its surrounding context, the model is trained with a combination of a standard pixel-wise reconstruction loss and two adversarial losses, which together ensure pixel accuracy and local-global content consistency. With extensive experiments on the ImageNet and Paris StreetView datasets, we demonstrate qualitatively and quantitatively that our approach outperforms the state of the art.
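As a rough illustration of the architecture the abstract describes, the sketch below implements a small convolutional auto-encoder with skip connections between mirrored encoder and decoder layers, plus the combined objective (pixel-wise reconstruction loss and local/global adversarial terms). This is a minimal sketch under stated assumptions: the layer depths, channel counts, discriminator outputs, and the loss weights `lam_rec`/`lam_adv` are illustrative choices, not the paper's exact configuration.

```python
# Sketch of a skip-connected convolutional auto-encoder for inpainting.
# Depths, channel counts, and loss weights are assumptions for illustration.
import torch
import torch.nn as nn


def down(cin, cout):
    # Encoder block: strided conv halves spatial resolution.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )


def up(cin, cout):
    # Decoder block: transposed conv doubles spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )


class SkipAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.e1 = down(3, 64)
        self.e2 = down(64, 128)
        self.e3 = down(128, 256)
        self.d3 = up(256, 128)
        self.d2 = up(256, 64)  # 128 decoder + 128 skip channels
        self.d1 = nn.Sequential(  # 64 decoder + 64 skip channels
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        f1 = self.e1(x)
        f2 = self.e2(f1)
        f3 = self.e3(f2)
        h = self.d3(f3)
        # Skip connections: concatenate features from the mirrored
        # encoder layer so low-level detail reaches the decoder directly.
        h = self.d2(torch.cat([h, f2], dim=1))
        return self.d1(torch.cat([h, f1], dim=1))


def joint_loss(pred, target, d_local_logits, d_global_logits,
               lam_rec=0.999, lam_adv=0.001):
    # Pixel-wise reconstruction keeps the fill pixel-accurate; two
    # adversarial terms (local patch and whole-image discriminators)
    # push local and global consistency. Weights are assumed values.
    bce = nn.functional.binary_cross_entropy_with_logits
    rec = nn.functional.mse_loss(pred, target)
    adv_local = bce(d_local_logits, torch.ones_like(d_local_logits))
    adv_global = bce(d_global_logits, torch.ones_like(d_global_logits))
    return lam_rec * rec + lam_adv * (adv_local + adv_global)


net = SkipAutoEncoder()
x = torch.randn(4, 3, 128, 128)   # batch of masked input images
y = net(x)                        # (4, 3, 128, 128) inpainted output
```

The concatenation-based skips follow the common U-Net convention; the paper's exact skip mechanism (concatenation vs. addition) is not specified in the abstract, so this is one plausible realization.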
