Abstract

Many artworks have suffered damage over time that significantly degrades their visual quality, so restoring them is a valuable and meaningful task. We propose a deep network architecture, the Image Inpainting Conditional Generative Adversarial Network (II-CGAN), to address this problem. Based on deep convolutional neural networks (CNNs), we learn the mapping between the detail layers of damaged and restored images directly from data. Because intact counterparts of real-world damaged images are unavailable, we synthesize images with missing blocks for training. To minimize the loss of information and ensure better visual quality, a refined network architecture is introduced. We thoroughly evaluate a generator of increased depth (22 layers) built from units of 3 × 3 and 4 × 4 convolution filters, and a discriminator that uses small 3 × 3 convolution kernels in place of 4 × 4 kernels in all convolutional layers. Experimental results show that the proposed method achieves better objective and subjective performance.
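
To illustrate the design choice of replacing 4 × 4 discriminator kernels with 3 × 3 ones, the following is a minimal sketch only, not the authors' II-CGAN code: it assumes a PyTorch implementation, and the class name, channel widths, and number of blocks are illustrative assumptions rather than details from the paper.

```python
# Illustrative sketch: a patch-style discriminator built entirely from 3x3
# convolutions, in the spirit of the abstract's "small (3x3) kernels instead
# of 4x4" choice. All sizes here are assumptions, not the paper's settings.
import torch
import torch.nn as nn


class SmallKernelDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 3, base_channels: int = 64):
        super().__init__()
        layers = []
        channels = [in_channels, base_channels, base_channels * 2, base_channels * 4]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            # 3x3 kernels with stride 2 halve the spatial resolution per block,
            # standing in for the 4x4 strided kernels of a typical DCGAN discriminator.
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        # Final 3x3 convolution maps the features to a single-channel score map.
        layers.append(nn.Conv2d(channels[-1], 1, kernel_size=3, stride=1, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    d = SmallKernelDiscriminator()
    scores = d(torch.randn(1, 3, 128, 128))
    print(scores.shape)  # (1, 1, 16, 16): one real/fake score per image patch
```

Smaller kernels keep the parameter count per layer down while stacked 3 × 3 convolutions still cover a large receptive field, which is the usual motivation for this kind of substitution.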
