Although most existing methods based on Generative Adversarial Networks (GANs) generally produce plausible results, they introduce significant artifacts and restore textures poorly when large regions are missing or the background of the missing regions is complex. To address this issue, in this paper, we propose RFA-Net, a novel texture-aware backbone network for finer-texture image inpainting. In contrast to conventional encoder–decoder methods, our main contribution is a novel RFA-Net that adopts a non-pooling residual CNN structure with three novel modules; it retains texture features from shallow layers and adaptively learns the importance of channels and spatial locations of features that can benefit image inpainting. In addition, we propose a hybrid loss optimization (HLO) module that enables the generator to focus on the semantic and texture details of the inpainted content. Experimental results demonstrate that RFA-Net recovers texture details, produces images consistent with the ground truth, and outperforms state-of-the-art methods in both visual quality and quantitative metrics. Our source code and data are available online at https://github.com/Jamie-61/RFA-Net-Inpainting.
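The abstract does not detail the three modules, so the following is only a minimal PyTorch sketch, not the authors' implementation, of the kind of block the description suggests: a non-pooling (stride-1) residual block that preserves shallow texture features via an identity skip, with learned channel and spatial gates that reweight features by channel and location. All names here (e.g., `AttentiveResidualBlock`) are hypothetical.

```python
# Minimal sketch (assumed, not the RFA-Net code) of a non-pooling residual
# block with channel and spatial attention, in PyTorch.
import torch
import torch.nn as nn


class AttentiveResidualBlock(nn.Module):
    """Hypothetical block: stride-1 convs (no pooling) keep spatial
    resolution, so shallow texture features survive; learned gates
    reweight channels and spatial locations."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention (squeeze-and-excitation style gate).
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel importance map over locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.body(x)
        f = f * self.channel_gate(f)  # emphasize informative channels
        f = f * self.spatial_gate(f)  # emphasize informative locations
        return x + f  # identity skip retains shallow texture features


if __name__ == "__main__":
    block = AttentiveResidualBlock(64)
    y = block(torch.randn(1, 64, 128, 128))
    print(y.shape)  # torch.Size([1, 64, 128, 128]) -- resolution preserved
```

Because no pooling is applied, the output keeps the input's spatial resolution, which is consistent with the stated goal of retaining fine texture from shallow layers.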