Abstract

In this paper, we propose a Mask-Robust Inpainting Network (MRIN) to recover the masked areas of an image. Most existing methods learn a single model for image inpainting, under the basic assumption that all masks belong to the same type. However, we observe that masks are usually complex, exhibiting various shapes and sizes at different locations of an image, so a single model cannot fully bridge the large domain gap across different masks. To address this, we learn to decompose a complex mask area into several basic mask types and inpaint the damaged image patch by patch with type-specific generators. More specifically, our MRIN consists of a mask-robust agent and an adaptive patch generative network. The mask-robust agent contains a mask selector and a patch locator, which generate mask attention maps to select a patch at each step. We train the mask-robust agent with reinforcement learning to find the optimal inpainting patch route by formulating the sequential inpainting process as a Markov decision process. Then, guided by the predicted mask attention maps, the adaptive patch generative network inpaints the selected patch using a generator bank, so that each patch is inpainted by a different patch generator according to its mask type. Extensive experiments demonstrate that our approach outperforms most state-of-the-art approaches on the Places2, CelebA, and Paris Street View datasets.
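The abstract describes two components: an agent that, at each step, locates a patch and classifies its basic mask type, and a bank of type-specific patch generators that the patch is routed to. Below is a minimal, hypothetical PyTorch sketch of that structure only; the class names, layer sizes, coarse-grid patch locator, and 64x64 patch size are our own assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn


class MaskRobustAgent(nn.Module):
    """Hypothetical sketch: predicts a patch-location map and a basic mask type."""

    def __init__(self, num_mask_types: int, feat_dim: int = 64):
        super().__init__()
        # Shared encoder over the damaged image + current mask (3 + 1 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Patch locator: scores candidate patch locations on a coarse grid (assumed design).
        self.patch_locator = nn.Conv2d(feat_dim, 1, 1)
        # Mask selector: classifies the basic mask type of the selected patch.
        self.mask_selector = nn.Linear(feat_dim, num_mask_types)

    def forward(self, image, mask):
        feat = self.encoder(torch.cat([image, mask], dim=1))
        location_logits = self.patch_locator(feat)      # B x 1 x h x w grid of scores
        pooled = feat.mean(dim=(2, 3))                  # B x feat_dim
        type_logits = self.mask_selector(pooled)        # B x num_mask_types
        return location_logits, type_logits


class AdaptivePatchGenerativeNetwork(nn.Module):
    """Hypothetical sketch: a bank of type-specific patch generators."""

    def __init__(self, num_mask_types: int):
        super().__init__()
        self.generators = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
            )
            for _ in range(num_mask_types)
        ])

    def forward(self, patch, patch_mask, mask_type: int):
        # Route the patch to the generator matching its predicted mask type.
        return self.generators[mask_type](torch.cat([patch, patch_mask], dim=1))


# Hypothetical usage on a 256x256 damaged image with 4 assumed basic mask types.
agent = MaskRobustAgent(num_mask_types=4)
bank = AdaptivePatchGenerativeNetwork(num_mask_types=4)
img = torch.randn(1, 3, 256, 256)
msk = torch.zeros(1, 1, 256, 256)
loc_logits, type_logits = agent(img, msk)
mask_type = int(type_logits.argmax(dim=1))
inpainted_patch = bank(img[:, :, :64, :64], msk[:, :, :64, :64], mask_type)
```

In the paper the agent is trained with reinforcement learning over a Markov decision process, so the location and type predictions above would be sampled as actions and rewarded by inpainting quality; that training loop is omitted here.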
