Abstract

Video deblurring is a challenging task since the blur arises from camera shake, object motion, and other factors. The success of state-of-the-art methods stems mainly from exploiting the temporal information of neighboring frames through alignment. When occlusion occurs within the sequence, however, these approaches become less effective because the alignment is inaccurate. In this paper, we propose an effective occlusion-aware network to handle occlusion in video deblurring. The proposed module first generates a coarse pixel-wise alignment filter to exploit temporal information and then learns an adaptive affine transformation to handle the occluded areas. In addition, a self-attention mechanism is developed to better model the occluded pixels. To further improve performance, we adopt a multi-scale strategy and train the network in an end-to-end manner. Both quantitative and qualitative experimental results show that the proposed method performs favorably against state-of-the-art methods on benchmark datasets. The code and trained models are available at: <uri xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">https://github.com/XQLuck/code.git</uri>
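The two-stage idea in the abstract (coarse pixel-wise alignment followed by an occlusion-aware affine correction) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the per-pixel filter formulation, and the form of the affine blend are assumptions for exposition, and in the actual network the filters, occlusion mask, and affine parameters would all be predicted by learned layers.

```python
import numpy as np

def apply_pixelwise_filters(frame, filters):
    """Coarse alignment: apply a separate k x k filter at every pixel.

    frame:   (H, W) grayscale neighboring frame
    filters: (H, W, k, k) per-pixel alignment filters
             (assumed to be predicted by the network in the paper)
    """
    H, W, k, _ = filters.shape
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * filters[i, j])
    return out

def occlusion_aware_fuse(aligned, reference, occlusion_mask, scale, shift):
    """Occlusion handling: blend the aligned frame with an affine-transformed
    reference frame in occluded regions.

    occlusion_mask: (H, W) values in [0, 1]; 1 marks occluded pixels
                    (in the paper this would come from the attention module)
    scale, shift:   parameters of the adaptive affine transformation
                    (learned in the paper; plain inputs here)
    """
    affine = scale * reference + shift
    return (1 - occlusion_mask) * aligned + occlusion_mask * affine

# Example: identity filters leave the frame unchanged; an occluded pixel
# falls back to the affine-transformed reference.
frame = np.arange(16.0).reshape(4, 4)
filters = np.zeros((4, 4, 3, 3))
filters[:, :, 1, 1] = 1.0  # delta filter at each pixel = identity alignment
aligned = apply_pixelwise_filters(frame, filters)

reference = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[0, 0] = 1.0  # pretend this pixel is occluded
fused = occlusion_aware_fuse(aligned, reference, mask, scale=2.0, shift=0.5)
```

With identity filters the aligned output equals the input frame, and only the masked pixel is replaced by the affine-corrected reference value, which is the division of labor the abstract describes.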
