Abstract

Deep learning has been widely applied to image inpainting. However, traditional image processing methods (i.e., patch-based and diffusion-based methods) generally fail to produce visually natural content and semantically reasonable structures because they cannot effectively exploit the high-level semantic information of images. To address this problem, we propose a stacked generator network assisted by a patch discriminator for multistage image inpainting. In the proposed method, the generator consists mainly of a three-layer stacked encoder-decoder architecture, which fuses feature information from different levels and performs image inpainting via a coarse-to-fine hierarchical representation. Meanwhile, the masked image is split into patches at each layer, which effectively enlarges the receptive field and extracts more useful image features. Moreover, a patch discriminator is introduced to judge whether the patches of the inpainted image are real or fake. In this way, our network can effectively exploit semantic information to produce fine results. Furthermore, both a perceptual loss and a style loss are used to further improve the inpainting results. Experimental results on Places2 and Paris StreetView show that our approach generates high-quality inpainting results and is more effective than existing image inpainting methods.

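For readers unfamiliar with the components named in the abstract, the sketch below gives a minimal PyTorch illustration of a PatchGAN-style patch discriminator (which scores each image patch as real or fake) and of perceptual and style losses computed from frozen VGG-16 features. The layer choices, channel widths, and tapped VGG layers are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only, assuming standard PatchGAN and VGG-based losses;
# not the authors' released code.
import torch.nn as nn
from torchvision import models


class PatchDiscriminator(nn.Module):
    """Outputs a grid of logits, one per overlapping receptive-field patch."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 4, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, x):
        return self.net(x)


class VGGFeatures(nn.Module):
    """Frozen VGG-16 feature extractor; the tapped layer indices are illustrative."""
    def __init__(self, layer_ids=(3, 8, 15, 22)):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layer_ids = vgg, set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats


def gram(f):
    """Gram matrix of a feature map, normalized by its size."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


def perceptual_and_style_loss(extractor, output, target):
    """Perceptual loss: L1 between feature maps; style loss: L1 between Gram matrices."""
    l1 = nn.L1Loss()
    perc = style = 0.0
    for fo, ft in zip(extractor(output), extractor(target)):
        perc = perc + l1(fo, ft)
        style = style + l1(gram(fo), gram(ft))
    return perc, style
```

Under these assumptions, a training loop would combine an adversarial term from the discriminator's per-patch logits with weighted perceptual and style terms; the loss weights are hyperparameters not specified in the abstract.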