Non-uniform motion deblurring remains a challenging problem in computer vision. Deep learning-based deblurring methods have recently achieved promising results. In this paper, we propose a new joint strong-edge and multi-stream adaptive fusion network for non-uniform motion deblurring. The edge map and the blurred image are jointly used as network inputs, and an Edge Extraction Network (EEN) guides the Deblurring Network (DN) during image restoration by supplying important edge information. A Multi-stream Adaptive Fusion Module (MAFM) adaptively fuses the edge information with encoder and decoder features, reducing feature redundancy and avoiding image artifacts. Furthermore, a Dense Attention Feature Extraction Module (DAFEM) is designed to focus on severely blurred regions and extract information important for restoration. In addition, an edge loss function measures the difference between the edge features of the generated and sharp images, further sharpening the edges of the deblurred results. Experiments show that our method outperforms existing public methods in terms of PSNR, SSIM, and VIF, and produces images with less residual blur and sharper edges.
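The abstract does not give the exact form of the edge loss; below is a minimal sketch of one common formulation, assuming Sobel-filtered edge maps compared with an L1 penalty (the function names `sobel_edges` and `edge_loss` are hypothetical, not taken from the paper):

```python
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Approximate edge map of an (N, C, H, W) image via depthwise Sobel gradients."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    gx = F.conv2d(img, kx.expand(c, 1, 3, 3), padding=1, groups=c)
    gy = F.conv2d(img, ky.expand(c, 1, 3, 3), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def edge_loss(deblurred: torch.Tensor, sharp: torch.Tensor) -> torch.Tensor:
    """L1 distance between the edge maps of the restored and ground-truth images."""
    return F.l1_loss(sobel_edges(deblurred), sobel_edges(sharp))
```

In such a setup the edge loss is typically added to a standard pixel-wise reconstruction loss with a small weighting factor, encouraging the network to reproduce sharp edges rather than only minimizing average pixel error.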