Abstract

Motion blur, which degrades both human and machine perception of a scene, has traditionally been treated as an unwanted artifact to be removed. However, blur can also be a useful cue for understanding a dynamic scene, since different sources of motion produce different types of blur. Motivated by this relationship between motion and blur, we propose a motion-aware feature learning framework for dynamic scene deblurring based on multi-task learning. Our multi-task framework simultaneously estimates a deblurred image and a motion field from a single blurred image. We design encoder-decoder architectures for the two tasks and share the encoder between them. The motion estimation network can effectively distinguish between different types of blur, which facilitates image deblurring; conversely, the implicit motion information learned during image deblurring can improve motion estimation. In addition to sharing the network between the two tasks, we propose a reblurring loss function to optimize all the parameters of our multi-task architecture. We provide an extensive analysis of the complementary tasks to show the effectiveness of our multi-task framework. Furthermore, experimental results demonstrate that the proposed method outperforms state-of-the-art deblurring methods in both qualitative and quantitative evaluations.
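The abstract describes, at a high level, a shared encoder feeding two task-specific decoders (one for deblurring, one for motion estimation) together with a reblurring loss. The following is a minimal PyTorch sketch of that layout, not the authors' implementation: the layer sizes, the reblur operator (averaging the predicted sharp image warped along fractions of the estimated per-pixel motion), and the loss weight are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Encoder shared by both tasks (illustrative two-layer version)."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Upsampling decoder; out_ch=3 for the sharp image, 2 for the motion field."""
    def __init__(self, out_ch, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, f):
        return self.net(f)


def reblur(sharp, motion, steps=5):
    """Hypothetical reblurring operator: average the sharp image warped along
    fractions of the estimated per-pixel motion to approximate the blur process."""
    b, _, h, w = sharp.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=sharp.device),
        torch.linspace(-1, 1, w, device=sharp.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    acc = torch.zeros_like(sharp)
    for t in torch.linspace(-0.5, 0.5, steps):
        flow = motion.permute(0, 2, 3, 1) * t
        # Normalise pixel displacements to grid_sample's [-1, 1] coordinate range.
        flow = flow / torch.tensor([w / 2, h / 2], device=sharp.device)
        acc = acc + F.grid_sample(sharp, base + flow, align_corners=True)
    return acc / steps


# Example forward pass and joint loss on random data.
encoder, deblur_dec, motion_dec = SharedEncoder(), Decoder(3), Decoder(2)
blurred = torch.rand(2, 3, 64, 64)
sharp_gt = torch.rand(2, 3, 64, 64)

features = encoder(blurred)            # shared representation for both tasks
sharp_pred = deblur_dec(features)      # deblurred image
motion_pred = motion_dec(features)     # per-pixel motion field

loss_deblur = F.l1_loss(sharp_pred, sharp_gt)
loss_reblur = F.l1_loss(reblur(sharp_pred, motion_pred), blurred)  # reblurring loss
loss = loss_deblur + 0.1 * loss_reblur  # 0.1 is an illustrative weight
loss.backward()
```

Because the encoder is shared and the reblurring loss couples the two decoder outputs, gradients from both tasks update a common feature representation, which is the multi-task effect the abstract highlights.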
