Abstract

Existing video deblurring datasets and algorithms rest on the unrealistic assumption that a naturally blurred video is blurred in every frame. In this work, we define a more realistic frame-averaging-based data degradation model that treats a naturally blurred video as a partially blurred frame sequence, and use it to build REBVIDS, a novel video deblurring dataset that closes the gap between naturally and synthetically blurred training data and addresses most shortcomings of existing datasets. We also present DeblurNet, a two-stage deep learning model for video deblurring consisting of two main sub-modules: a Frame Selection Module and a Frame Deblurring Module. Compared to recent learning-based approaches, its sub-modules have simpler network structures with fewer training parameters, are easier to train, and offer faster inference. Because naturally blurred videos are only partially blurred, the Frame Selection Module selects the blurred frames in a video sequence and forwards them to the Frame Deblurring Module, which restores them and recombines them, in their original order, with their initially sharp neighboring frames into a newly restored sequence. Extensive experimental results on several benchmarks demonstrate that DeblurNet performs favorably against the state of the art, both quantitatively and qualitatively. DeblurNet can trade off speed, computational cost, and restoration quality: besides restoring blurred video frames with the necessary edges and details, its small size and integrated frame selection mechanism let it speed up inference by over ten times compared to existing approaches. The project dataset and code will be released soon at: https://github.com/nahliabdelwahed/Speed-up-video-deblurring-
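The frame-averaging degradation model described in the abstract can be illustrated with a minimal sketch: a synthetic "long exposure" is produced by averaging a window of consecutive sharp frames, and only a fraction of positions in the sequence is replaced this way, yielding a *partially* blurred sequence. This is not the paper's released code; the window size, blur ratio, and function names are assumptions for illustration.

```python
import numpy as np

def synthesize_blurred_frame(sharp_frames):
    """Synthesize one motion-blurred frame by averaging a window of
    consecutive sharp frames, mimicking a long exposure.
    sharp_frames: list of HxWxC uint8 arrays. Returns one uint8 frame."""
    stack = np.stack([f.astype(np.float32) for f in sharp_frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

def degrade_sequence(frames, window=7, blur_ratio=0.3, seed=0):
    """Produce a partially blurred sequence: a random subset of positions
    (about blur_ratio of those with a full window) is replaced by an
    averaged frame; the rest stay sharp. Returns (frames_out, blur_mask)."""
    rng = np.random.default_rng(seed)
    half = window // 2
    out, mask = [], []
    for i, f in enumerate(frames):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        if hi - lo == window and rng.random() < blur_ratio:
            out.append(synthesize_blurred_frame(frames[lo:hi]))
            mask.append(True)
        else:
            out.append(f)  # border frames and unselected frames stay sharp
            mask.append(False)
    return out, mask
```

The returned `blur_mask` doubles as a per-frame label, which is the kind of supervision a frame selection (blurred vs. sharp) classifier would need.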

Highlights

  • Video frame deblurring has long been an important problem in computer vision and image processing

  • This work makes the following main contributions: 1) We introduce DeblurNet, a two-stage deep learning model for fast and robust frame-selective video deblurring

  • EXPERIMENTS AND RESULTS: we describe the experiments conducted with DeblurNet, share their quantitative and qualitative results, and compare them with other deep learning-based deblurring methods; we also introduce our novel REBVIDS video deblurring dataset and describe the other existing datasets studied in the video deblurring literature


Summary

INTRODUCTION

Video frame deblurring has long been an important problem in computer vision and image processing. Nah et al. [11] achieved state-of-the-art results by adopting a multi-scale convolutional neural network. Their method begins at a very coarse scale of the blurry image and progressively recovers a clear image at higher resolutions until the full resolution is reached. Although some early learning-based video deblurring approaches already perform well in terms of restoration quality, most of them still require long run times and heavy computation during inference. This work makes the following main contributions: 1) We introduce DeblurNet, a two-stage deep learning model for fast and robust frame-selective video deblurring.
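The selective inference flow described above (select the blurred frames, deblur only those, then recombine them in their original order next to the initially sharp frames) might be sketched as follows. Here `is_blurred` and `deblur_frame` are hypothetical stand-ins for the Frame Selection Module and the Frame Deblurring Module; the actual networks are not shown in this summary.

```python
def restore_video(frames, is_blurred, deblur_frame):
    """Selective video deblurring: only frames flagged as blurred pass
    through the (expensive) deblurring model; sharp frames are copied
    through unchanged, which is where the inference speed-up comes from.

    frames:        ordered list of video frames
    is_blurred:    frame -> bool   (stands in for the Frame Selection Module)
    deblur_frame:  frame -> frame  (stands in for the Frame Deblurring Module)
    """
    # 1) Select: record the indices of the blurred frames only.
    blurred_idx = [i for i, f in enumerate(frames) if is_blurred(f)]
    # 2) Deblur: restore just that subset.
    restored = {i: deblur_frame(frames[i]) for i in blurred_idx}
    # 3) Recombine: place restored frames back at their original positions,
    #    alongside their initially sharp neighbors.
    return [restored.get(i, f) for i, f in enumerate(frames)]
```

If only a minority of frames in a natural video are blurred, most frames skip the heavy model entirely, which is consistent with the reported over-tenfold inference speed-up.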

RELATED WORKS
FRAME DEBLURRING MODULE
EXPERIMENTS AND RESULTS
IMAGE QUALITY METRICS
MODEL TRAINING STRATEGY
Findings
CONCLUSION
