Abstract

Existing learning-based dynamic scene deblurring methods have made notable progress. However, most of them rely on a multiscale strategy, which has two shortcomings: (1) bilinear downsampling discards important high-frequency information, e.g., strong edges, which in turn hinders the network from learning a better deblurring mapping; (2) existing methods use only a single activation function, which limits the model's ability to fit the data and makes its performance saturate easily. We therefore propose an end-to-end progressive downsampling and adaptive guidance network, called PDAG-Net, to address these problems. PDAG-Net retains more of the strong edges and other high-frequency information in a blurry image, enabling the network to learn a more effective deblurring mapping between the input and label images. Within PDAG-Net, we design a multiscale blended activation residual block, called MSBA-ResBlock, to learn the nonlinear characteristics of dynamic scene blur; it alleviates the performance saturation caused by a single activation function and improves multiscale feature extraction. Finally, we propose a multi-supervision strategy that yields more robust and effective features and gives the network more stable training and faster convergence. Extensive experiments on a public dataset show that the proposed network outperforms state-of-the-art image deblurring methods.
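
To illustrate the idea of blending several activation functions inside a residual block, the following is a minimal PyTorch sketch. It is not the authors' MSBA-ResBlock: the branch layout, channel counts, choice of activations, and learnable blend weights are all illustrative assumptions made here.

```python
# Minimal sketch of a blended-activation residual block (illustrative, not the
# paper's exact MSBA-ResBlock design).
import torch
import torch.nn as nn


class BlendedActivationResBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Several activations whose outputs are blended with learnable weights,
        # instead of relying on a single activation function.
        self.acts = nn.ModuleList([nn.ReLU(), nn.LeakyReLU(0.2), nn.GELU()])
        self.blend = nn.Parameter(torch.full((len(self.acts),), 1.0 / len(self.acts)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.conv1(x)
        weights = torch.softmax(self.blend, dim=0)
        # Weighted mixture of the activation outputs.
        feat = sum(w * act(feat) for w, act in zip(weights, self.acts))
        return x + self.conv2(feat)  # residual connection


if __name__ == "__main__":
    block = BlendedActivationResBlock(64)
    out = block(torch.randn(1, 64, 128, 128))
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```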
