Abstract

Image inpainting fills in the missing regions of an image so that the result is semantically plausible and visually realistic. Recently, deep neural networks have shown significant advantages in filling large missing areas in image inpainting tasks. These methods can generate reasonable structures and textures, but they often produce distorted structures or blurry textures that are inconsistent with the surrounding background. To address these problems, we propose a new progressive generation network for image inpainting. The proposed model not only generates new image structures but also effectively exploits the contextual information around the missing areas to better predict their content. The model is a feed-forward, fully convolutional neural network that divides the image inpainting task into a coarse-to-fine progressive generation process, introducing a structural reconstruction loss and a style loss at different stages to meet the requirements of each stage. Extensive experiments show that the proposed method produces higher-quality results than existing methods, especially for face inpainting with large missing regions.
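To make the loss terms concrete, the sketch below illustrates one common way such losses are computed; this is an illustrative assumption, not the paper's exact formulation. It shows a masked L1 structural reconstruction loss over the hole region and a Gram-matrix style loss on feature maps, both written with NumPy for clarity:

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) feature map; the Gram matrix captures channel-wise
    # correlations, which act as a texture/style statistic.
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(feat_pred, feat_true):
    # Mean absolute difference between the Gram matrices of predicted
    # and ground-truth feature maps.
    return np.abs(gram_matrix(feat_pred) - gram_matrix(feat_true)).mean()

def reconstruction_loss(pred, target, mask):
    # Masked L1 loss: penalize pixel differences only inside the hole
    # (mask == 1 marks missing pixels), normalized by hole size.
    return np.abs((pred - target) * mask).sum() / max(mask.sum(), 1)
```

In a coarse-to-fine pipeline, the reconstruction term would typically supervise the coarse stage (global structure) while the style term refines textures in the later stage; the exact weighting between stages is a design choice of the method.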
