Abstract

Existing deep-learning-based image inpainting methods tend to lose information when extracting deep features, which hinders the restoration of texture details, and they often neglect the inpainting of semantic features. In addition, many of them produce results with implausible structures. To address these problems, an image inpainting network based on a multi-scale feature joint attention model is proposed. First, when extracting deep image features, multi-scale feature fusion is used to reduce information loss during convolution. Second, a joint attention mechanism both strengthens the model's ability to restore image semantics and ensures that the model generates inpainting results with distinct textures. Finally, style loss and perceptual loss are introduced into the network to keep the details and style of the inpainting results consistent. Qualitative experiments on the CelebA-HQ and Places2 datasets, together with common evaluation metrics such as PSNR and SSIM, show that the method outperforms existing image inpainting methods: compared with the baseline methods, it improves PSNR by 0.4%–6% and SSIM by 0.4%–3%.
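The style and perceptual losses mentioned in the abstract are conventionally computed on feature maps taken from a pretrained network (the paper does not specify the backbone here). A minimal NumPy sketch of the two loss terms, assuming the feature maps are already given as `(C, H, W)` arrays, could look like:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map. The Gram matrix of channel responses
    # captures channel co-activation statistics, i.e. the "style".
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def perceptual_loss(feat_pred, feat_gt):
    # Mean absolute difference between deep feature maps of the
    # inpainted result and the ground truth (content/detail term).
    return float(np.mean(np.abs(feat_pred - feat_gt)))

def style_loss(feat_pred, feat_gt):
    # Mean absolute difference between Gram matrices (style term).
    return float(np.mean(np.abs(gram_matrix(feat_pred) - gram_matrix(feat_gt))))
```

This is a sketch of the standard formulation of these two losses, not the authors' exact implementation; in practice both terms would be summed over several layers of a pretrained feature extractor.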
