Abstract

Camera shake and target movement often lead to undesirable blurring in videos. How to exploit the spatial-temporal information of adjacent frames and how to reduce deblurring processing time are two major issues in video deblurring. In this paper, we propose a simple yet effective Fourier-accumulation-embedded 3D convolutional encoder-decoder network for video deblurring. First, a 3D convolutional encoder-decoder module is constructed to extract multiscale spatial-temporal deep features and generate intermediate deblurred frames whose complementary information benefits the deblurring of each frame. We then embed a Fourier accumulation module after the 3D convolutional encoder-decoder; this module fuses the intermediate deblurred frames with learned weights in the Fourier domain to produce sharper deblurred frames. Experimental results show that our method performs competitively against other state-of-the-art methods.
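The Fourier accumulation step can be sketched as follows. This is a minimal illustration of the idea described above, not the paper's implementation: all module names, shapes, and the choice of one learnable scalar weight per intermediate frame are assumptions (the actual method may learn spatially varying or frequency-wise weights).

```python
import torch
import torch.nn as nn

class FourierAccumulation(nn.Module):
    """Hypothetical sketch: fuse intermediate deblurred frames with
    learned weights in the Fourier domain, then return to the spatial
    domain. Shapes and weighting scheme are illustrative assumptions."""

    def __init__(self, num_frames):
        super().__init__()
        # One learnable weight per intermediate frame (assumption).
        self.weights = nn.Parameter(torch.ones(num_frames) / num_frames)

    def forward(self, frames):
        # frames: (batch, num_frames, channels, height, width)
        spectra = torch.fft.fft2(frames)           # per-frame 2D FFT
        w = torch.softmax(self.weights, dim=0)     # normalized fusion weights
        w = w.view(1, -1, 1, 1, 1)
        fused = (spectra * w).sum(dim=1)           # weighted sum in Fourier domain
        return torch.fft.ifft2(fused).real         # back to spatial domain

# Fuse five hypothetical intermediate deblurred frames into one output.
x = torch.randn(2, 5, 3, 32, 32)
out = FourierAccumulation(num_frames=5)(x)
print(out.shape)  # torch.Size([2, 3, 32, 32])
```

Because the 3D encoder-decoder emits several intermediate deblurred frames, a learned frequency-domain fusion of this kind can combine their complementary content into a single sharper frame.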
