Abstract
Image inpainting is a reconstruction technique in which the holes of a corrupted image are filled with the most relevant content from the valid regions of the image. To inpaint an image, we propose a lightweight cascaded architecture with <i>2.5M parameters</i>, consisting of an encoder feature aggregation block (FAB) with decoder feature sharing (DFS) inpainting network, followed by a refinement network. First, the FAB-with-DFS (inpainting) generator network is proposed, which comprises a multi-level feature aggregation mechanism and a feature-sharing decoder. The FAB uses multi-scale spatial channel-wise attention to fuse weighted features from all encoder levels. The DFS reconstructs the inpainted image with multi-scale, multi-receptive-field feature sharing, so that hole regions from small to large are inpainted effectively. The refinement generator network is then proposed to refine the inpainted image produced by the inpainting generator network. The effectiveness of the proposed architecture is verified on the CelebA-HQ [1], [2], Paris Street View (PARIS_SV) [3], and Places2 [4] datasets, corrupted using the publicly available NVIDIA mask dataset [5]. Extensive result analysis with a detailed ablation study proves the robustness of the proposed architecture over state-of-the-art image inpainting methods.
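The FAB's attention-weighted fusion of multi-level encoder features could be sketched roughly as follows. This is a minimal NumPy illustration of the general idea (resize each level to a common scale, weight channels by attention, concatenate); the function names, the nearest-neighbour resizing, and the softmax channel weighting are all illustrative assumptions, not the paper's actual learned formulation.

```python
import numpy as np

def channel_attention(feat):
    """Weight each channel of a (C, H, W) feature map (illustrative stand-in
    for the paper's learned spatial channel-wise attention)."""
    # Squeeze: global average pool over spatial dims -> (C,)
    squeezed = feat.mean(axis=(1, 2))
    # Excite: softmax-normalised channel weights (assumption, not the real MLP)
    weights = np.exp(squeezed - squeezed.max())
    weights = weights / weights.sum()
    # Scale each channel by its weight
    return feat * weights[:, None, None]

def feature_aggregation(encoder_feats, target_hw=(8, 8)):
    """Hypothetical FAB sketch: fuse weighted features from all encoder levels."""
    fused = []
    for feat in encoder_feats:
        c, h, w = feat.shape
        # Nearest-neighbour resize each level to a common spatial size
        ys = np.arange(target_hw[0]) * h // target_hw[0]
        xs = np.arange(target_hw[1]) * w // target_hw[1]
        resized = feat[:, ys][:, :, xs]
        fused.append(channel_attention(resized))
    # Concatenate the attention-weighted levels along the channel axis
    return np.concatenate(fused, axis=0)
```

With three encoder levels of shapes (4, 32, 32), (8, 16, 16), and (16, 8, 8), the fused output has shape (28, 8, 8): all levels contribute, each re-weighted channel-wise before fusion.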