Abstract

Most recent generative image inpainting methods have shown promising performance by adopting attention mechanisms to fill hole regions with known-region features. However, these methods tend to neglect the impact of reliable hole-region information, which leads to discontinuities in the structure and texture of the final results. Moreover, they often fail to predict plausible contents with realistic details in hole regions because the vanilla decoder is ineffective at capturing long-range information at each level. To address these problems, we propose a confidence-based global attention guided network (CGAG-Net) consisting of coarse and fine steps, where each step is built upon an encoder-decoder architecture. CGAG-Net propagates reliable global information to missing contents through an attention mechanism, and uses attention scores learned from high-level features to guide the reconstruction of low-level features. Specifically, we propose a confidence-based global attention (CGA) layer, embedded in the encoder, which fills hole regions with reliable global features weighted by learned attention scores, where the reliability of features is measured by automatically generated confidence values. Meanwhile, the attention scores learned by CGA are reused to guide feature prediction at each level of our proposed attention guided decoder (AG Decoder). Thus, the AG Decoder can obtain semantically and texturally coherent features from global regions to predict missing contents. Extensive experiments on the Paris StreetView and CelebA datasets validate the superiority of the proposed approach through quantitative and qualitative comparisons with existing methods.
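The abstract does not specify the CGA layer's exact formulation, but its core idea, filling hole positions with global features weighted jointly by attention scores and per-position confidence, can be sketched in a few lines. The function below is a simplified illustration, not the paper's implementation: the function name, the cosine-similarity score, and the log-confidence bias are all assumptions made for the sketch, and the returned attention scores stand in for the scores the paper reuses across decoder levels.

```python
import numpy as np

def confidence_weighted_attention(features, hole_mask, confidence):
    """Toy sketch (not the paper's CGA layer): fill hole positions with a
    confidence-weighted attention combination of all positions, and return
    the attention scores so they could be reused (e.g., at decoder levels).

    features:   (N, C) feature vectors at N spatial positions
    hole_mask:  (N,) boolean, True where the position is missing
    confidence: (N,) assumed reliability in [0, 1] per position
    """
    # Cosine similarity between each hole position and every position.
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = norm[hole_mask] @ norm.T                       # (H, N)

    # Down-weight unreliable sources before the softmax (an assumed scheme:
    # adding log-confidence multiplies the softmax weight by the confidence).
    logits = sim + np.log(confidence + 1e-8)             # (H, N)
    scores = np.exp(logits - logits.max(axis=1, keepdims=True))
    scores /= scores.sum(axis=1, keepdims=True)          # rows sum to 1

    # Fill each hole position with the score-weighted sum of all features.
    filled = features.copy()
    filled[hole_mask] = scores @ features
    return filled, scores
```

Positions given zero confidence receive a vanishingly small attention weight, so the hole is reconstructed almost entirely from reliable regions; this mirrors, in miniature, the abstract's claim that reliability-weighted global features reduce structure and texture discontinuities.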

