Abstract

Traditional image inpainting algorithms aim to produce satisfactory results for relatively small, regular missing regions; images with large, irregular holes, however, remain challenging. Our focus is not on filling holes in a corrupted image but on the harder task of semantic repair, which predicts the content of large missing regions from the context of the surrounding pixels. In this paper, we propose a perceptual generative adversarial network for image generation and inpainting. During training, a pre-trained VGG model is introduced into the GAN so that the generated images retain more high-frequency features. This framework can not only synthesize perceptually realistic images but also better reconstruct corrupted images by exploiting their encodings. A weighted context loss is adopted to repair the missing regions, while an adversarial loss penalizes perceptually unrealistic results. Experiments on the CelebA dataset show that our proposed method produces higher-quality inpainting results.
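The weighted context loss mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the particular weighting scheme (weighting each known pixel by the density of missing pixels in its neighborhood, so that pixels near the hole boundary contribute most) and the window size are assumptions chosen for illustration.

```python
import numpy as np

def context_weights(mask, window=3):
    """Per-pixel importance weights for the context loss.

    mask: 2D array, 1.0 for known pixels, 0.0 inside the hole.
    Each known pixel is weighted by the fraction of missing
    pixels in its (window x window) neighborhood, so pixels
    adjacent to the hole boundary matter most; hole pixels
    themselves get zero weight.
    """
    hole = 1.0 - mask
    pad = window // 2
    padded = np.pad(hole, pad, mode="constant")
    h, w = mask.shape
    weights = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            weights[i, j] = padded[i:i + window, j:j + window].mean()
    return weights * mask  # only known pixels contribute

def weighted_context_loss(generated, original, mask, window=3):
    """Weighted L1 distance between generated and original images,
    evaluated only on known pixels, emphasizing the hole boundary."""
    w = context_weights(mask, window)
    return float(np.sum(w * np.abs(generated - original)))

# Tiny example: a 4x4 image with a 2x2 hole in the middle.
mask = np.ones((4, 4))
mask[1:3, 1:3] = 0.0
img = np.random.rand(4, 4)
# A perfect reconstruction incurs zero context loss.
loss = weighted_context_loss(img, img, mask)
```

In a full training loop this term would be combined with the adversarial loss, e.g. `total = weighted_context_loss(...) + lam * adversarial_loss`, where `lam` balances reconstruction fidelity against perceptual realism (the combination form and `lam` are likewise illustrative assumptions).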
