Abstract

Most current deep-learning-based image inpainting algorithms lose information when extracting deep-level features, which hinders the restoration of texture details; they also often neglect semantic features, producing inpainting results with unreasonable structures. To address these problems, we propose an improved image inpainting network built from a multi-scale feature module and an improved attention module. First, we propose a multi-scale fusion module based on dilated convolution, which reduces information loss during convolution by fusing multi-scale features when extracting deep-level features. The attention module then strengthens semantic inpainting and ensures that the proposed model generates results with clear texture. To keep the inpainted details and styles consistent, style loss and perceptual loss functions are introduced into the network. Qualitative and quantitative experiments on the CelebA-HQ, Places2, and Outdoor Scenes datasets, evaluated with common metrics such as PSNR, SSIM, and FID, show that the proposed method outperforms state-of-the-art approaches.
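The abstract's multi-scale fusion module rests on a standard property of dilated convolution: increasing the dilation rate enlarges the receptive field without adding parameters, so parallel branches with different rates capture context at several scales. A minimal sketch of that arithmetic is below; the dilation rates used are illustrative assumptions, not the paper's actual configuration.

```python
def effective_kernel_size(k, d):
    # A k-tap kernel with dilation d spans k + (k-1)(d-1) input positions,
    # while still using only k learned weights.
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    # Receptive field of stacked stride-1 convolutions,
    # given as a list of (kernel_size, dilation) pairs.
    rf = 1
    for k, d in layers:
        rf += effective_kernel_size(k, d) - 1
    return rf

# Illustrative branches of a multi-scale fusion block: the same 3x3 kernel
# at dilation rates 1, 2, and 4 sees 3, 5, and 9 input positions respectively,
# so fusing the branches mixes fine texture with wider structural context.
branch_spans = [effective_kernel_size(3, d) for d in (1, 2, 4)]
stacked_rf = receptive_field([(3, 1), (3, 2), (3, 4)])
```

Here `branch_spans` is `[3, 5, 9]` and `stacked_rf` is `15`: stacking the three dilated layers covers a 15-pixel span with only nine learned weights per filter.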
