Abstract
Deep learning-based methods, especially those built on convolutional neural networks (CNNs) and generative adversarial networks (GANs), have achieved notable success on the task of image inpainting. Previous methods typically try to generate the content of the missing areas from scratch. However, they have difficulty producing salient image structures that appear natural and consistent with the neighborhood, especially when the missing area is large. In this paper, we address this challenge by introducing edges into convolutional GAN-based inpainting. We split the inpainting task into two steps: edge generation, followed by edge-based image generation. We adopt CNNs to accomplish both steps and use GAN-based training; hence our method is named E2I: generative inpainting from edge to image. Specifically, we adopt a deep network-based edge detector to obtain an edgeness map of the incomplete image, then fill in the missing areas of the edgeness map, and finally generate the missing pixels with the assistance of the completed edgeness map. We verify the proposed method on three challenging image datasets: Places2, ImageNet, and CelebA. We also compare our method with state-of-the-art methods on the Places2 test set. Our experimental results demonstrate the superior performance of our method in producing more plausible inpainting results.
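The abstract describes a two-stage pipeline: detect edges on the known pixels, complete the edgeness map inside the hole, then synthesize the missing pixels guided by the completed edges. The sketch below (PyTorch) illustrates only this data flow; the module names, channel layouts, and masking convention are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TwoStageInpainter(nn.Module):
    """Hypothetical sketch of an edge-to-image inpainting pipeline.

    `edge_detector`, `edge_completer`, and `image_completer` stand in for
    the paper's networks; their internals are assumptions.
    """

    def __init__(self, edge_detector, edge_completer, image_completer):
        super().__init__()
        self.edge_detector = edge_detector      # image -> edgeness map
        self.edge_completer = edge_completer    # fills edges inside the hole
        self.image_completer = image_completer  # paints pixels guided by edges

    def forward(self, image, mask):
        # mask: 1 inside the missing region, 0 on known pixels
        known = image * (1 - mask)
        # Step 1: edgeness map of the incomplete image
        edges = self.edge_detector(known)
        # Step 2: complete the edgeness map in the missing areas
        full_edges = self.edge_completer(torch.cat([edges, mask], dim=1))
        # Step 3: generate pixels with the completed edges as guidance
        generated = self.image_completer(
            torch.cat([known, full_edges, mask], dim=1))
        # Keep known pixels; use generated content only inside the hole
        return known + generated * mask
```

In such a design, the hard problem of hallucinating large structures is delegated to the edge-completion stage, so the image generator only has to render textures consistent with an already-plausible structural scaffold.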