Abstract

This paper investigates a cross-scale, semantic-feature-coherent image inpainting approach, motivated by the difficulty existing inpainting methods have in fusing semantic feature information effectively. First, semantic feature relevance is learned step by step through an attention mechanism on the high-level semantic feature map, and the learned attention is then applied to the preceding low-level feature map. To preserve the visual and semantic coherence of the repaired image, the missing content is filled by transferring attention from deep to shallow layers in a multiscale manner. Partial convolution provides a broader receptive field, and semantic feature relevance is modeled by a multiscale, cross-feature-space attention mechanism based on semantic attention. By reconstructing semantic information across different feature spaces, the technique improves the extensibility and continuity of the restored images, reusing features not only within a given semantic feature space but also across feature spaces. Experimental results show improvements of 10.50% in PSNR, 0.13% in SSIM, and 47.09% in L1 error, demonstrating clear benefits.
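To make the deep-to-shallow attention transfer described above concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: attention scores are computed between pixels of the high-level feature map and restricted to the known region, then the same scores are reused to fill the missing region of the preceding, higher-resolution feature map. The module name CrossScaleAttentionTransfer and all tensor shapes are illustrative assumptions.

# Minimal sketch (assumed names and shapes) of cross-scale attention transfer.
import torch
import torch.nn.functional as F


class CrossScaleAttentionTransfer(torch.nn.Module):
    def forward(self, deep_feat, shallow_feat, mask):
        """
        deep_feat:    (B, C1, H, W)    high-level semantic feature map
        shallow_feat: (B, C2, 2H, 2W)  preceding low-level feature map
        mask:         (B, 1, H, W)     1 = known region, 0 = missing region
        """
        b, _, h, w = deep_feat.shape

        # Cosine-similarity attention on the deep (semantic) feature map.
        tokens = F.normalize(deep_feat.flatten(2), dim=1)        # (B, C1, H*W)
        scores = torch.bmm(tokens.transpose(1, 2), tokens)       # (B, HW, HW)

        # Restrict attention to known (unmasked) source pixels.
        known_keys = mask.flatten(2)                              # (B, 1, HW)
        scores = scores.masked_fill(known_keys < 0.5, -1e4)
        attn = torch.softmax(scores, dim=-1)                      # (B, HW, HW)

        # Reuse the same attention at the shallow scale: each missing shallow
        # pixel is rebuilt as an attention-weighted sum of known pixels.
        shallow_lr = F.interpolate(shallow_feat, size=(h, w),
                                   mode='bilinear', align_corners=False)
        sh_tokens = shallow_lr.flatten(2)                         # (B, C2, HW)
        filled = torch.bmm(sh_tokens, attn.transpose(1, 2))      # (B, C2, HW)
        filled = F.interpolate(filled.view(b, -1, h, w),
                               size=shallow_feat.shape[-2:],
                               mode='bilinear', align_corners=False)

        # Keep the known content of the shallow map, fill only the hole.
        up_mask = F.interpolate(mask, size=shallow_feat.shape[-2:], mode='nearest')
        return shallow_feat * up_mask + filled * (1.0 - up_mask)

In a multiscale decoder, this module would be applied at each scale, so that attention learned on the deepest feature map guides the reconstruction of progressively shallower ones.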
