Abstract

Inpainting refers to reconstructing an incomplete image or video by analysing its context, features, and so on. Deep convolutional neural networks have proven to be an effective approach to inpainting. However, existing algorithms often produce vague, blurry results and require large amounts of time to train the models. To address this issue, this article builds on the architecture of Context Encoders, retaining the strategy of combining an encoder with generative adversarial networks (GANs), and adds a global discriminator which, together with the local discriminator, forms a multi-scale adversarial network. The local discriminator ensures local detail while the global discriminator guarantees global consistency. Compared with Context Encoders, this network is simplified by removing some redundant structure and is therefore faster. We also re-derive the loss function of the network and train it on the Paris dataset. The results show that our network achieves better performance than Context Encoders on street-view pictures to some extent.
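The abstract describes a generator trained against both a local (patch-level) and a global (whole-image) discriminator on top of a reconstruction objective. The snippet below is a minimal sketch of how such a joint generator loss could be combined; the specific loss form (L2 reconstruction plus non-saturating GAN terms), the weighting factor `lam`, and the toy stand-in discriminators are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy on sigmoid outputs, clipped for numerical stability.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def generator_loss(real_patch, fake_patch, fake_full,
                   local_disc, global_disc, lam=0.001):
    """Joint generator loss (illustrative): L2 reconstruction on the filled
    region plus adversarial terms from a local (patch) discriminator and a
    global (whole-image) discriminator."""
    l_rec = float(np.mean((real_patch - fake_patch) ** 2))
    # Non-saturating GAN terms: the generator wants both discriminators
    # to classify its output as real (label 1).
    l_adv_local = bce(local_disc(fake_patch), np.ones(1))
    l_adv_global = bce(global_disc(fake_full), np.ones(1))
    return l_rec + lam * (l_adv_local + l_adv_global)

# Toy "discriminators": a sigmoid of the mean pixel value stands in for a CNN.
local_disc = lambda x: np.array([1.0 / (1.0 + np.exp(-x.mean()))])
global_disc = lambda x: np.array([1.0 / (1.0 + np.exp(-x.mean()))])

rng = np.random.default_rng(0)
real = rng.random((64, 64))                         # ground-truth patch
fake = real + 0.1 * rng.standard_normal((64, 64))   # generator's fill
full = rng.random((128, 128))                       # completed whole image

loss = generator_loss(real, fake, full, local_disc, global_disc)
```

In a real training loop the two discriminators would be CNNs trained in alternation with the generator; the point of the sketch is only how the patch-level and image-level adversarial signals are added to the reconstruction term.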
