Abstract

Deep-learning-based repair of masked images often suffers from insufficient representation of multi-level information and inadequate use of long-distance features. To address these problems, this paper proposes a second-order generative image inpainting model based on Structural Constraints and Multi-scale Feature Fusion (SCMFF). The SCMFF model consists of two parts: an edge repair network and an image inpainting network. The edge repair network combines an auto-encoder with a Dilated Residual Feature Pyramid Fusion (DRFPF) module, which improves the representation of multi-level semantic information and structural details and thereby yields better edge repair. The image inpainting network then embeds a Dilated Multi-scale Attention Fusion (DMAF) module in the auto-encoder for texture synthesis with the real edge as the prior condition, aggregating long-distance features of different dimensions to achieve fine-grained inpainting under the edge constraint. Finally, the edge repair results replace the real edges, and the two networks are fused and trained jointly to achieve end-to-end repair from the masked image to the complete image. The model is compared with state-of-the-art methods on the CelebA, Facade and Places2 datasets. Quantitatively, the LPIPS, MAE, PSNR and SSIM metrics improve by 0.0124-0.0211, 3.787-6.829, 2.934 dB-5.730 dB and 0.034-0.132, respectively. Qualitatively, the edge distribution in the center of holes reconstructed by the SCMFF model is more uniform, and the synthesized texture better matches human visual perception.
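The core operation shared by the DRFPF and DMAF modules described above is fusing features computed at several dilation rates, so that a single layer mixes local detail with long-distance context. A minimal single-channel NumPy sketch of that idea is shown below; the function names, kernel shape, and averaging-based fusion are illustrative assumptions, not the paper's implementation, which uses learned multi-channel convolutions and attention weights.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded convolution of a 2-D array with a dilated 3x3 kernel.

    Dilation inserts (rate - 1) zeros between kernel taps, enlarging the
    receptive field without adding parameters.
    """
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)                     # zero-pad so output size == input size
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate     # dilated tap offsets
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def multi_scale_fusion(x, kernel, rates=(1, 2, 4)):
    """Fuse responses from several dilation rates by simple averaging
    (the paper fuses learned branches; averaging is a stand-in)."""
    return sum(dilated_conv2d(x, kernel, r) for r in rates) / len(rates)
```

With an identity kernel (1 at the center, 0 elsewhere) every branch returns the input unchanged, which makes the padding and offset arithmetic easy to verify; with a real learned kernel, each rate contributes context from a different spatial range.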

