Abstract
Semantic image inpainting is the task of completing large, high-level missing regions of an image based on its uncorrupted parts. Classical inpainting methods can only handle low-level or mid-level missing regions because they lack a learned representation of the image. In this paper, we present a new method for semantic image inpainting based on a generative model that learns a representation of the image database. After training the generative model with DCGAN, we propose a completion model that combines a perceptual loss and a contextual loss built on generative adversarial networks. We qualitatively and quantitatively explore how missing regions of different types and sizes affect inpainting quality. In extensive experiments, our method successfully completes large missing regions and produces realistic results. We find that the model performs well when the corrupted area covers less than 50% of the image, for both center and random masks.
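A minimal sketch of the completion step described above, assuming a DCGAN generator G and discriminator D have already been trained (the function name, signature, and hyperparameters below are hypothetical, not taken from the paper): the latent code is optimized so that the generated image matches the known pixels (contextual loss) while the discriminator keeps the completion realistic (perceptual loss).

```python
import torch

def inpaint(G, D, corrupted, mask, steps=1000, lam=0.1, lr=0.01):
    """corrupted: (N,3,H,W) image with missing pixels; mask: 1 where pixels are known."""
    # Start from a random latent code and optimize it directly.
    z = torch.randn(corrupted.size(0), 100, 1, 1, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        generated = G(z)
        # Contextual loss: match the generated image to the uncorrupted pixels only.
        contextual = torch.mean(torch.abs(mask * (generated - corrupted)))
        # Perceptual loss: the pre-trained discriminator penalizes unrealistic completions.
        perceptual = torch.mean(torch.log(1 - D(generated) + 1e-8))
        loss = contextual + lam * perceptual
        loss.backward()
        optimizer.step()
    # Blend: keep the known pixels, fill the hole with the generated content.
    return mask * corrupted + (1 - mask) * G(z).detach()
```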