Abstract

With the development of image generation and processing techniques, deep-learning-based image inpainting has achieved impressive results. In particular, emphasizing global context during inpainting enables a network to generate reasonable coarse results at low resolutions. However, achieving high-quality texture filling at high resolutions remains a challenging problem. To address it, most methods design two-stage networks that restore structure and texture separately, but under large-scale masks the generated textures still suffer from blur and artifacts. Therefore, to inpaint images with large-scale masks and generate fine textures, this paper proposes an end-to-end generative adversarial model for large-mask inpainting, called the Panoramic Feature Aggregation Network (PFAN). First, this paper designs a Euclidean Attention Mechanism (EAM) that exploits encoder features to produce a low-resolution structure restoration. Then, a Feature Aggregation Synthesis Block (FASB) is proposed in the decoder to fill in complex high-resolution textures. With the global receptive fields of these two modules, the network produces satisfactory texture filling even under large-scale masks. Experiments on the CelebA-HQ, Paris Street View and FFHQ datasets show that the proposed method achieves superior performance.
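The abstract does not define the Euclidean Attention Mechanism in detail. Assuming "Euclidean" refers to scoring query-key similarity by negative squared Euclidean distance rather than the usual dot product, a minimal sketch of such an attention step might look as follows; the function name and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def euclidean_attention(q, k, v):
    """Attention whose scores are negative squared Euclidean distances
    between queries and keys (an assumed reading of "Euclidean attention").
    q: (n, d) queries, k: (m, d) keys, v: (m, dv) values."""
    # Pairwise squared distances ||q_i - k_j||^2, shape (n, m).
    d2 = np.sum((q[:, None, :] - k[None, :, :]) ** 2, axis=-1)
    # Closer keys get higher scores; scale as in dot-product attention.
    scores = -d2 / np.sqrt(q.shape[-1])
    # Numerically stable row-wise softmax.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # Each output row is a distance-weighted mixture of the values.
    return w @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
out = euclidean_attention(q, q, rng.standard_normal((4, 8)))
print(out.shape)  # (4, 8)
```

Because every query attends over all keys, such a module has a global receptive field, which matches the abstract's motivation for using it in structure restoration.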
