Abstract

Image inpainting fills in missing regions of an image or removes unwanted objects, and has broad application prospects. Rapid progress in deep learning has driven new breakthroughs in inpainting, steadily improving output quality. However, when large missing regions are inpainted, existing methods cannot fully exploit the texture and structural features of the image, which leads to blurry results. To address this problem, we propose an improved dual-stream U-Net algorithm that adds an attention mechanism to both U-Net networks, known as a dual AU-Net network, to improve the texture details of the image. In addition, a location code (LC) of the damaged regions is fed to the network to guide the repair and accelerate convergence. A least squares GAN (LSGAN) loss is added to the generator's adversarial objective to capture more content detail and stabilize training. Our method achieves a PSNR of 33.93 and an SSIM of 0.931 on the CelebA and Paris datasets, and comparisons with other methods demonstrate its effectiveness.
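The LSGAN objective mentioned above replaces the usual cross-entropy GAN loss with least-squares terms, which is what gives the claimed training stability. A minimal NumPy sketch of the standard LSGAN losses follows; the function names are illustrative and the paper's exact loss weighting relative to its reconstruction terms is not specified in the abstract.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Standard LSGAN discriminator loss: push scores on real samples
    toward 1 and scores on generated samples toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Standard LSGAN generator loss: push the discriminator's scores
    on generated samples toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# Toy discriminator scores: a perfect discriminator yields zero D loss,
# while the generator still receives a nonzero gradient signal.
d_real = np.ones(4)
d_fake = np.zeros(4)
print(lsgan_d_loss(d_real, d_fake))  # 0.0
print(lsgan_g_loss(d_fake))          # 0.5
```

Because the loss is quadratic in the discriminator's raw scores rather than passed through a sigmoid, gradients do not saturate for samples the discriminator already classifies confidently, which is the usual motivation for choosing LSGAN in inpainting pipelines.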
