Abstract

In recent years, deep learning has yielded promising results in image inpainting. However, existing inpainting algorithms pay insufficient attention to the structural and textural features of the image, which leads to blurring and distortion in the restored regions. To address these problems, a channel attention mechanism was introduced to emphasize the structural and textural features extracted by the convolutional network. A bidirectional gated feature fusion module was employed to exchange and fuse the structural and textural features, ensuring the overall consistency of the image. In addition, the ordinary convolution in the contextual feature aggregation module was replaced with a deformable convolution whose receptive field adapts to the image content, allowing image features to be captured more effectively and producing vivid, realistic restorations with more reasonable details. Experiments showed that, compared with current mainstream networks, the proposed algorithm produces more realistic restoration results, and its superiority was demonstrated both qualitatively and quantitatively.
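
To make the two building blocks named above more concrete, the following is a minimal PyTorch sketch of (1) a squeeze-and-excitation style channel attention block and (2) a deformable convolution that replaces an ordinary 3x3 convolution. The module names, layer sizes, and reduction ratio are illustrative assumptions, not the authors' implementation; only torchvision's DeformConv2d is a real library call.

```python
# Illustrative sketch only: channel attention + deformable convolution,
# assuming PyTorch and torchvision; not the paper's released code.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ChannelAttention(nn.Module):
    """Re-weights feature channels so that informative structural/textural
    channels extracted by the backbone can be emphasized."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: one value per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weight in (0, 1)
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))          # scale each channel by its weight


class DeformableConvBlock(nn.Module):
    """Replaces an ordinary 3x3 convolution: a small conv predicts sampling
    offsets so the receptive field adapts to the image content."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # 2 offsets (x, y) per position of the 3x3 kernel -> 18 offset channels
        self.offset = nn.Conv2d(in_channels, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform(x, self.offset(x))


if __name__ == "__main__":
    feat = torch.randn(1, 64, 64, 64)             # dummy feature map
    attended = ChannelAttention(64)(feat)         # emphasize structure/texture channels
    out = DeformableConvBlock(64, 64)(attended)   # content-adaptive aggregation
    print(out.shape)                              # torch.Size([1, 64, 64, 64])
```

In a full inpainting network these blocks would sit inside the feature-extraction and contextual feature aggregation stages respectively; the bidirectional gated fusion of structure and texture branches is omitted here for brevity.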
