Abstract

Existing deep-learning-based image inpainting algorithms often suffer from local structural disconnection and blurring when dealing with large irregular missing regions. To address these problems, an image structure-induced semantic pyramid network for inpainting is proposed. The model consists of two parts: an edge inpainting network and a content-filling network. The U-Net-based edge inpainting network, built with residual blocks, restores the edges in the defective region. The completed edge map is then fed into the pyramid content-filling network together with the image as a structural prior. In the content-filling network, an attention transfer module (ATM) progressively reconstructs the encoder features at each scale; the restored feature maps are passed to the decoder through skip connections and fused with the corresponding latent features during decoding, improving the global consistency of the image and yielding the final restored result. Quantitative analysis on the CelebA-HQ and Places2 datasets shows that, compared with current mainstream algorithms, the average L1 loss is reduced by about 1.14%, the peak signal-to-noise ratio (PSNR) is improved by about 3.51 dB, and the structural similarity (SSIM) is improved by about 0.163. Qualitative analysis shows that the model not only generates semantically plausible content overall but also better matches human visual perception in terms of local structural connectivity and texture synthesis.
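
The two-stage pipeline described above can be summarized with a minimal PyTorch sketch. All module names, channel widths, layer counts, and the simplified attention transfer below are illustrative assumptions, not the paper's actual architecture; they only show how an edge network feeds a content-filling network whose encoder features are refined by attention before being fused into the decoder.

```python
# Minimal sketch of the two-stage inpainting pipeline (assumed structure, not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Plain residual block used inside the (assumed) U-Net-style edge network."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))


class EdgeInpaintingNet(nn.Module):
    """Stage 1: predicts a complete edge map from the masked image, its partial edge map, and the mask."""
    def __init__(self, base=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(3 + 1 + 1, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.res = nn.Sequential(*[ResidualBlock(base * 2) for _ in range(4)])
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, masked_img, masked_edge, mask):
        x = torch.cat([masked_img, masked_edge, mask], dim=1)
        return self.up(self.res(self.down(x)))


class AttentionTransfer(nn.Module):
    """Toy stand-in for the ATM: fills hole-region features with attention-weighted valid-region features."""
    def forward(self, feat, mask):
        b, c, h, w = feat.shape
        m = F.interpolate(mask, size=(h, w), mode="nearest")          # 1 = hole, 0 = valid
        flat = feat.flatten(2)                                        # (b, c, h*w)
        attn = torch.softmax(flat.transpose(1, 2) @ flat / c ** 0.5, dim=-1)
        filled = (attn @ flat.transpose(1, 2)).transpose(1, 2).view(b, c, h, w)
        return feat * (1 - m) + filled * m                            # keep valid pixels, replace hole


class ContentFillingNet(nn.Module):
    """Stage 2: encodes (image + predicted edges + mask), refines each scale with the ATM, then decodes."""
    def __init__(self, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3 + 1 + 1, base, 4, 2, 1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.ReLU(inplace=True))
        self.atm = AttentionTransfer()
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU(inplace=True))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, 3, 4, 2, 1), nn.Tanh())

    def forward(self, masked_img, edge_map, mask):
        x = torch.cat([masked_img, edge_map, mask], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(self.atm(e2, mask))
        # Skip connection: fuse the attention-refined encoder feature with the decoder feature.
        return self.dec1(torch.cat([d2, self.atm(e1, mask)], dim=1))


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    edge = torch.rand(1, 1, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # 1 marks missing pixels
    edge_net, fill_net = EdgeInpaintingNet(), ContentFillingNet()
    completed_edge = edge_net(img * (1 - mask), edge * (1 - mask), mask)
    restored = fill_net(img * (1 - mask), completed_edge, mask)
    print(restored.shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the attention transfer is applied once per encoder scale and its output is concatenated into the decoder, mirroring the scale-by-scale feature reconstruction and fusion described in the abstract; the paper's full pyramid of scales, loss functions, and training details are omitted.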
