Abstract
With the development of image generation and processing techniques, image inpainting methods based on deep learning have achieved impressive results. In particular, emphasizing global context during inpainting enables a network to generate reasonable coarse results at low resolutions. However, achieving high-quality texture filling at high resolutions remains a challenging problem. To address it, most methods design two-stage networks that restore structure and texture separately; yet under large-scale masks, the generated textures still suffer from blurring and artifacts. Therefore, to inpaint images with large-scale masks and generate fine textures, this paper proposes an end-to-end generative adversarial model for large-mask inpainting, called the Panoramic Feature Aggregation Network (PFAN). First, this paper designs a Euclidean Attention Mechanism (EAM) that exploits encoder features to produce a low-resolution structural restoration. Then, a Feature Aggregation Synthesis Block (FASB) is proposed in the decoder to fill in complex high-resolution textures. Thanks to the global receptive fields of these two modules, the network produces satisfactory texture-filling results even under large-scale masks. Experiments on the CelebA-HQ, Paris Street View, and FFHQ datasets show that the proposed method achieves superior performance.