Abstract
Video deblurring is a challenging task since blur arises from camera shake, object motion, and other sources. The success of state-of-the-art methods stems mainly from exploiting the temporal information of neighboring frames through alignment. When occlusion occurs within the sequence, however, these approaches become less effective because the alignment is inaccurate. In this paper, we propose an effective occlusion-aware network that handles occlusion for video deblurring. The proposed module first generates a coarse pixel-wise alignment filter to explore the temporal information and then learns an adaptive affine transformation to deal with the occluded areas. In addition, a self-attention mechanism is developed to better model the occluded pixels. To further improve performance, we adopt a multi-scale strategy and train the network in an end-to-end manner. Both quantitative and qualitative experimental results show that the proposed method performs favorably against state-of-the-art methods on benchmark datasets. The code and trained models are available at: <uri xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">https://github.com/XQLuck/code.git</uri>
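The core idea described in the abstract — applying a learned per-pixel filter to a neighboring frame for alignment, then a per-pixel affine transformation to compensate for occluded regions — can be illustrated with a minimal NumPy sketch. The function name, array shapes, and kernel size below are illustrative assumptions, not the paper's actual implementation; in the real network the filter, scale, and bias maps would be predicted by learned layers.

```python
import numpy as np

def occlusion_aware_align(neighbor, filters, affine_a, affine_b, k=3):
    """Illustrative sketch: per-pixel filtering of a neighboring frame,
    followed by a per-pixel affine transform for occluded areas.

    neighbor : (H, W) grayscale neighboring frame
    filters  : (H, W, k*k) per-pixel alignment kernels (assumed predicted
               by the network in the actual method)
    affine_a : (H, W) per-pixel scale map
    affine_b : (H, W) per-pixel bias map
    """
    H, W = neighbor.shape
    pad = k // 2
    padded = np.pad(neighbor, pad, mode="edge")
    aligned = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Weighted sum over the k x k neighborhood with this
            # pixel's own kernel (pixel-wise alignment filtering).
            patch = padded[i:i + k, j:j + k].ravel()
            aligned[i, j] = filters[i, j] @ patch
    # Adaptive affine transformation: rescale and shift each pixel,
    # allowing occluded pixels to be corrected or suppressed.
    return affine_a * aligned + affine_b
```

With an identity kernel (center weight 1), unit scale, and zero bias, the function simply returns the input frame, which is a quick sanity check on the filtering step.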
Published in: IEEE Transactions on Circuits and Systems for Video Technology