Abstract

In this paper, we integrate image gradient priors into generative adversarial networks (GANs) to address the dynamic scene deblurring task. Although image deblurring has progressed significantly, deep learning-based methods rarely take advantage of image gradient priors. Image gradient priors regularize the image recovery process and serve as a quantitative metric for evaluating the quality of deblurred images. In contrast to previous methods, the proposed model learns image gradients in a data-driven way. Guided by image gradient priors, we embed them throughout the design of both the network architecture and the loss functions. For the network architecture, we develop a GradientNet that computes image gradients along the horizontal and vertical directions in parallel, rather than adopting traditional edge detection operators. For the loss functions, we propose target loss functions to constrain the network training. The proposed image deblurring strategy discards the tedious steps of solving optimization equations and instead exploits deep learning's ability to learn features from massive data. Extensive experiments on synthetic datasets and real-world images demonstrate that our model outperforms state-of-the-art (SOTA) methods.
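To make the idea of directional gradient computation concrete, the sketch below computes horizontal and vertical image gradients in parallel using fixed finite-difference kernels. This is only an illustrative stand-in: the paper's GradientNet learns its gradient filters from data, whereas the kernels, function name, and NumPy-based setup here are assumptions for demonstration.

```python
import numpy as np

def image_gradients(img: np.ndarray):
    """Compute horizontal and vertical gradients of a 2-D image via
    forward finite differences (illustrative fixed kernels, not the
    learned filters of the paper's GradientNet)."""
    img = img.astype(float)
    # Horizontal gradient: difference along columns (x direction).
    gx = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    # Vertical gradient: difference along rows (y direction).
    gy = np.zeros_like(img)
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

# Example: a horizontal ramp image has a constant horizontal gradient
# and zero vertical gradient.
img = np.tile(np.arange(5), (5, 1))  # each row is 0, 1, 2, 3, 4
gx, gy = image_gradients(img)
```

Such gradient maps can then drive a gradient-based loss term (e.g., penalizing the difference between the gradients of the deblurred and sharp images), which is the role the target loss functions play in the proposed model.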
