Abstract

Various types of photographing equipment suffer image degradation when capturing a moving target, especially fast, complex nonlinear motion; motion blur is a representative degradation phenomenon. Although conventional deblurring methods are somewhat effective, they are incapable of precise blur-kernel estimation. It is therefore useful to summarize recent methods in a unified framework. In this paper, non-uniform blind deblurring in dynamic scenes is divided into single-image deblurring and event-based deblurring, and the most effective deep-learning image deblurring algorithms available today are analyzed and summarized. For single-image deblurring, a multi-scale deep learning method and an improved coarse-to-fine multi-scale network, MIMO-UNet, are introduced; for event-based deblurring, a novel method is summarized and analyzed: an end-to-end trainable recurrent architecture for event-based deblurring in dynamic real scenes (RED-Net). In addition, a new deblurring dataset named REDS is also reviewed. These recently proposed deblurring methods have promoted the development of computer vision. We find that, compared with single-image deblurring, deblurring with an event camera adapts to high-dynamic-range, high-speed conditions, can quickly capture the changes between latent images, and significantly improves deblurring performance in dynamic scenes.
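The advantage of event-based deblurring mentioned above rests on a simple relation: a blurry frame is the temporal average of the sharp latent frames over the exposure, and the event stream records the log-intensity changes between those latent frames. The following NumPy sketch illustrates this generic event-based blur model (in the spirit of the event-double-integral formulation, not RED-Net itself); the contrast threshold `c` and the function name are assumptions for illustration.

```python
import numpy as np

def latent_from_blur_and_events(blurry, event_sums, c=0.2):
    """Recover a sharp latent frame from a blurry frame plus events.

    Model: each latent frame differs from the reference frame by a
    multiplicative factor exp(c * E(t)), where E(t) is the signed event
    count accumulated from the reference time to t, so the blurry frame
    (the temporal average of latent frames) satisfies
        B = L_ref * mean_t exp(c * E(t)).

    blurry     : (H, W) float array, average intensity over the exposure
    event_sums : (N, H, W) accumulated signed event counts at N sample
                 times within the exposure
    c          : per-event log-intensity contrast threshold (assumed)
    """
    # Average multiplicative gain over the exposure window.
    gain = np.exp(c * event_sums).mean(axis=0)
    # Divide it out to recover the reference latent frame.
    return blurry / np.maximum(gain, 1e-8)
```

Because the events time-stamp intensity changes at microsecond resolution, this inversion stays well-posed even under the fast, nonlinear motion that defeats kernel-estimation approaches.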
