Abstract
Many artworks have been damaged to some extent over time, which greatly affects their visual quality, so repairing them is valuable and meaningful work. We propose a deep network architecture, the Image Inpainting Conditional Generative Adversarial Network (II-CGAN), to address this problem. Based on a deep convolutional neural network (CNN), we directly learn the mapping between the detail layers of damaged and repaired images from data. Since intact images corresponding to real-world damaged images are not available, we synthesize images with missing blocks for training. To minimize the loss of information and ensure better visual quality, a refined network architecture is introduced. We thoroughly evaluate a generator of increased depth (22 layers) built from units of 3 × 3 and 4 × 4 convolution filters, and a discriminator in which small 3 × 3 convolution kernels replace the 4 × 4 kernels in all convolution layers. Experimental results show that the proposed method achieves better objective and subjective performance.
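To make the described layout concrete, the following is a minimal, hypothetical sketch of the kernel-size choices mentioned in the abstract (3 × 3 and 4 × 4 units in the generator, 3 × 3 kernels throughout the discriminator). The abstract does not specify a framework, layer widths, strides, or normalization, so the class names, channel counts, and block structure below are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the II-CGAN building blocks described in the abstract.
# Channel widths, strides, activations, and block counts are assumptions.
import torch
import torch.nn as nn


class GeneratorUnit(nn.Module):
    """Assumed generator unit: a 3x3 conv followed by a 4x4 strided conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class Discriminator(nn.Module):
    """Assumed discriminator built only from small 3x3 kernels, per the abstract."""
    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        layers, ch = [], in_ch
        for mult in (1, 2, 4, 8):
            layers += [
                nn.Conv2d(ch, base * mult, kernel_size=3, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            ch = base * mult
        # Final 3x3 conv produces a real/fake score map.
        layers.append(nn.Conv2d(ch, 1, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)


# Quick shape check with a dummy 256x256 input.
if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)
    print(GeneratorUnit(3, 64)(x).shape)   # downsampled feature map
    print(Discriminator()(x).shape)        # patch-level real/fake scores
```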