Abstract
Non-uniform motion blur caused by camera shake or object motion is a common artifact in videos captured by hand-held devices. Recent advances in video deblurring have shown that convolutional neural networks (CNNs) are able to aggregate information from multiple unaligned consecutive frames to generate sharper images. However, without explicit image alignment, most existing CNN-based methods introduce temporal artifacts, especially when the input frames are severely blurred. To address this problem, we propose a novel video deblurring method that handles spatially varying blur in dynamic scenes. In particular, we introduce a motion estimation and motion compensation module that estimates the optical flow from the blurry images and then warps the previously deblurred frame to restore the current frame. Thus, the previous processing results benefit the restoration of subsequent frames. This recurrent scheme utilizes contextual information efficiently and facilitates the temporal coherence of the results. Furthermore, to suppress the negative effect of alignment error, we propose an adaptive information fusion module that filters the temporal information adaptively. Experimental results confirm that the proposed method is both effective and efficient.
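The warp-then-fuse idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the learned optical-flow network and the learned fusion weights are replaced by given arrays, the warp uses nearest-neighbor sampling, and all function names (`warp`, `adaptive_fuse`) are hypothetical.

```python
import numpy as np

def warp(prev_frame, flow):
    """Backward-warp the previously deblurred frame toward the current frame.

    prev_frame: (H, W) grayscale image; flow: (H, W, 2) per-pixel (dx, dy)
    displacement. Nearest-neighbor sampling keeps the sketch dependency-free;
    a real implementation would use bilinear sampling.
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples location (x + dx, y + dy) in the previous frame.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_frame[src_y, src_x]

def adaptive_fuse(current, warped, weight):
    """Blend warped temporal information into the current frame.

    weight in [0, 1] stands in for a learned per-pixel confidence map:
    where alignment is unreliable (low weight), the result falls back
    to the current frame, suppressing alignment errors.
    """
    return weight * warped + (1.0 - weight) * current

# Toy example: a horizontal ramp image shifted one pixel to the right.
prev = np.tile(np.arange(5, dtype=float), (5, 1))
flow = np.zeros((5, 5, 2))
flow[..., 0] = -1.0  # each output pixel samples one pixel to its left
aligned = warp(prev, flow)
fused = adaptive_fuse(prev, aligned, weight=0.5)
```

In the recurrent scheme, `fused` would be refined by the restoration network and then serve as `prev_frame` for the next time step, so each deblurred output propagates forward.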