Abstract

Image inpainting is one of the research hotspots in computer vision and image processing. Image inpainting methods based on deep learning models have made some progress, but they struggle to achieve ideal results on images that require consistency between global and local attributes. In particular, when repairing large missing regions, the semantic plausibility, structural coherence, and detail accuracy of the results need to be improved. In view of these shortcomings, this study proposes an improved image inpainting model based on a fully convolutional neural network and a generative adversarial network. A novel fully convolutional network is used as the generator to repair the defective image, and structural similarity (SSIM) is introduced as the reconstruction loss to supervise and guide model learning from the perspective of the human visual system and to improve the inpainting effect. Improved global and local context discriminator networks are used as context discriminators to judge the authenticity of the repaired results. Combined with the adversarial loss, a joint loss is proposed to supervise model training, so that the content of the repaired region is realistic and natural and remains consistent in attributes with the whole image. To verify the effectiveness of the proposed model, its inpainting results are compared with those of current mainstream inpainting algorithms on the CelebA-HQ dataset using both subjective and objective metrics. The experimental results show that the proposed method makes progress in semantic plausibility, structural coherence, and detail accuracy, indicating a better understanding of high-level image semantics and a more accurate grasp of context and detail information.
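The abstract does not give the exact formulation of the joint loss, but the general idea (an SSIM-based reconstruction term combined with adversarial terms from global and local context discriminators) can be sketched as follows. This is a minimal illustration assuming PyTorch; the function names (`ssim`, `generator_loss`), the box-filter SSIM approximation, and the weighting factor `alpha` are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # Mean SSIM over local windows, approximated with average pooling.
    # x, y: (N, C, H, W) tensors scaled to [0, 1].
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def generator_loss(completed, target, d_global_logits, d_local_logits, alpha=0.001):
    # Reconstruction term: 1 - SSIM pushes the completed image toward the
    # ground truth under a structural-similarity criterion.
    rec_loss = 1.0 - ssim(completed, target)
    # Adversarial term: the generator tries to make both the global and the
    # local context discriminators classify the completed image as real.
    adv_loss = F.binary_cross_entropy_with_logits(
        d_global_logits, torch.ones_like(d_global_logits)
    ) + F.binary_cross_entropy_with_logits(
        d_local_logits, torch.ones_like(d_local_logits)
    )
    # Joint loss supervising the generator.
    return rec_loss + alpha * adv_loss
```

In such a setup the global discriminator sees the whole completed image while the local discriminator sees a crop around the repaired region, so the joint loss encourages both overall attribute consistency and locally realistic detail.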
