Abstract

Image inpainting is the task of restoring the missing or damaged regions of an incomplete image. Existing neural-network-based inpainting algorithms suffer from structural distortions and blurred textures along visible boundaries, and are prone to overfitting during training; after repair, the damaged areas often show obvious restoration artifacts, semantic discontinuities, and blurring. This paper proposes an improved image inpainting method based on a new encoder combined with a contextual loss function. To obtain clear restored images and ensure that the semantic features of images are fully learned, a generative network built on a fusion of squeeze-and-excitation (SE) networks and deep residual learning is proposed, which improves the use of network features while reducing the number of network parameters. At the same time, a discriminative network based on the squeeze-and-excitation residual network (SE-ResNet) is proposed to strengthen the discriminator's capability. To make the generated image more realistic and closer to the original, a joint context-aware loss (contextual perception loss) training method is also proposed, which constrains the similarity of local features so that the repaired image is closer to the original and more realistic. Experimental results show that the proposed algorithm adapts better than the comparison algorithms across a number of image categories, and that its inpainting results are superior to those of five state-of-the-art algorithms.
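The fusion of squeeze-and-excitation gating with residual learning mentioned above can be illustrated with a minimal sketch. The abstract does not give the block's internals, so the following is an assumption based on the standard SE-ResNet pattern: squeeze each channel by global average pooling, pass the result through a bottleneck MLP with a sigmoid gate, rescale the branch's channels, then add the residual connection. The function names, the identity branch, and the weight shapes (`w1`, `w2` with reduction ratio `r`) are illustrative, not taken from the paper.

```python
import numpy as np

def se_recalibrate(features, w1, w2):
    """Squeeze-and-excitation channel recalibration.

    features: feature map of shape (C, H, W)
    w1: bottleneck weights of shape (C // r, C)
    w2: expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling per channel -> vector of length C
    s = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, yields per-channel gates in (0, 1)
    z = np.maximum(w1 @ s, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))
    # Scale: reweight each channel map by its learned gate
    return features * gate[:, None, None]

def se_residual_block(x, branch, w1, w2):
    """Residual unit with SE gating applied to the branch output.

    `branch` stands in for the block's convolutional path, which the
    abstract does not specify; here it is any (C, H, W) -> (C, H, W) map.
    """
    return np.maximum(x + se_recalibrate(branch(x), w1, w2), 0.0)
```

With zero-initialized gate weights the sigmoid outputs 0.5 for every channel, so an identity branch yields `relu(x + 0.5 * x)`; training then learns per-channel gates that emphasize informative features, which is the mechanism the abstract credits for clearer restorations with fewer parameters.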

