Abstract

Photos and videos captured by handheld imaging devices often suffer from unwanted blur caused by hand jitter and fast object motion during the exposure time. Most previous studies have addressed single-image and video deblurring but neglected a detailed analysis of the spatiotemporal continuity between adjacent frames, which limits deblurring performance. We propose a novel end-to-end blind video motion deblurring network that takes triple adjacent frames as input to deblur a blurry video frame. In our approach, a bidirectional temporal feature transfer between the triple adjacent frames passes the latent features of the central frame to a group encoder of its neighbors. A hybrid decoder then decodes the grouped features and estimates a sharper version of the central frame. Experimental results show that our model outperforms previous state-of-the-art methods in terms of traditional metrics (PSNR and SSIM) and visual quality at an acceptable time cost. The code is available at https://github.com/BITLIULONGEE/Triple-Adjacent-Frame-Generative-Network.
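
To make the described pipeline concrete, the sketch below illustrates one plausible reading of the abstract: a central-frame encoder, a group encoder for the two neighbors that receives the central frame's latent features (the bidirectional temporal transfer), and a hybrid decoder that fuses the grouped features into a sharper central frame. All module names, channel sizes, and the residual output are illustrative assumptions, not the authors' actual architecture; see the linked repository for the real implementation.

```python
# Hypothetical sketch of the triple-adjacent-frame deblurring pipeline.
# Layer choices and channel widths are assumptions for illustration only.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Encodes one frame (optionally with extra feature channels) into a latent map."""
    def __init__(self, in_ch=3, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class TripleFrameDeblurNet(nn.Module):
    """Takes three adjacent blurry frames and predicts a sharper central frame."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.center_enc = Encoder(3, feat_ch)
        # Group encoder for the neighbors: each neighbor frame is concatenated
        # with the central frame's latent features (the temporal transfer).
        self.neighbor_enc = Encoder(3 + feat_ch, feat_ch)
        # Hybrid decoder fuses the grouped features and outputs a residual image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch * 3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 3, 3, padding=1),
        )

    def forward(self, prev_f, center_f, next_f):
        center_feat = self.center_enc(center_f)
        # Bidirectional transfer: central features flow to both neighbors.
        prev_feat = self.neighbor_enc(torch.cat([prev_f, center_feat], dim=1))
        next_feat = self.neighbor_enc(torch.cat([next_f, center_feat], dim=1))
        group = torch.cat([prev_feat, center_feat, next_feat], dim=1)
        # Predict a residual and add it to the blurry central frame.
        return center_f + self.decoder(group)


if __name__ == "__main__":
    net = TripleFrameDeblurNet()
    prev_f, center_f, next_f = (torch.randn(1, 3, 64, 64) for _ in range(3))
    sharp = net(prev_f, center_f, next_f)
    print(sharp.shape)  # torch.Size([1, 3, 64, 64])
```

The residual formulation (predicting a correction added to the blurry central frame) is a common design choice in restoration networks; whether the authors use it here is an assumption of this sketch.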
