Abstract

Image motion deblurring methods based on deep learning have achieved promising performance. However, these methods ignore the global dependence of structural features, which leads to incomplete structures or the introduction of artifacts in deblurred images. To address this issue, we propose an image motion deblurring method based on an attention mechanism and a generative adversarial network. Firstly, a feature extraction strategy combining a residual module with a cascaded criss-cross attention module is proposed, which captures rich feature information together with the contextual relationships among pixels. Secondly, a local–global dual-scale discriminator is adopted to supervise the generator so that it produces complete local details and global contours with a larger receptive field. Thirdly, a multi-component loss function is designed to guide the network to focus on relevant edge features rather than remote interference points, thus improving the realism of deblurred images in terms of color and texture. Finally, quantitative and qualitative experiments on deblurring benchmark datasets demonstrate that our method performs favorably against state-of-the-art deep image deblurring methods.
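As a rough illustration of the criss-cross attention idea referenced above, the following is a minimal sketch of a single criss-cross attention block, assuming a PyTorch implementation. It is not the paper's exact module: it normalizes the row and column energies with separate softmaxes (a simplification of the joint softmax in the original CCNet formulation), and the 1x1 projection layers and channel-reduction ratio are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrissCrossAttention(nn.Module):
    """Each pixel attends to all pixels in its own row and its own column.

    Simplified sketch: separate row/column softmaxes; reduction ratio of 8
    is an illustrative assumption, not a value taken from the paper.
    """

    def __init__(self, in_channels: int, reduction: int = 8):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        self.query = nn.Conv2d(in_channels, inter, kernel_size=1)
        self.key = nn.Conv2d(in_channels, inter, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)

        # Horizontal (row-wise) attention: each row is treated independently.
        q_row = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)    # (b*h, w, c')
        k_row = k.permute(0, 2, 1, 3).reshape(b * h, -1, w)    # (b*h, c', w)
        v_row = v.permute(0, 2, 3, 1).reshape(b * h, w, c)     # (b*h, w, c)
        attn_row = F.softmax(torch.bmm(q_row, k_row), dim=-1)  # (b*h, w, w)
        out_row = torch.bmm(attn_row, v_row).reshape(b, h, w, c)

        # Vertical (column-wise) attention: each column is treated independently.
        q_col = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)    # (b*w, h, c')
        k_col = k.permute(0, 3, 1, 2).reshape(b * w, -1, h)    # (b*w, c', h)
        v_col = v.permute(0, 3, 2, 1).reshape(b * w, h, c)     # (b*w, h, c)
        attn_col = F.softmax(torch.bmm(q_col, k_col), dim=-1)  # (b*w, h, h)
        out_col = torch.bmm(attn_col, v_col).reshape(b, w, h, c).permute(0, 2, 1, 3)

        out = (out_row + out_col).permute(0, 3, 1, 2)          # back to (b, c, h, w)
        return self.gamma * out + x                            # residual connection
```

Cascading two such blocks back to back lets every pixel aggregate information from the entire image, since context first propagates along each pixel's row and column and is then relayed across them, which is the kind of global dependence the abstract argues is missing from prior deblurring networks.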
