Abstract
Image deblurring is a challenging problem in computational photography and computer vision. In the deep learning era, deblurring methods built on neural networks have achieved significant results. However, existing methods mainly focus on solving a specific image deblurring problem and overlook the origin of the motion blur. In this paper, we revisit how blur arises and divide it into three categories: blur caused by relative motion between the camera and the scene, blur caused by the movement of an object itself, and blur at the edges of the image, where discontinuities may appear because pixel trajectories are sampled from outside the image. To address these different kinds of blur within a single image, we propose a two-stage neural network for image deblurring named RAID-Net. To remove the global blur caused by camera movement, we first use a U-shaped network to obtain a coarse deblurred image. We then leverage an adaptive reasoning module that jointly models the relationships between different blurry regions within one image and removes the remaining two categories of motion blur. Experiments on two public benchmark datasets demonstrate that our method achieves results comparable to or better than the state of the art.
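As a rough structural sketch of the two-stage design described above (this is not the paper's implementation: the function names, the horizontal-band "regions", and the zero residual correction are all hypothetical placeholders standing in for learned components), the pipeline could be composed like this:

```python
import numpy as np

def coarse_deblur(image: np.ndarray) -> np.ndarray:
    """Stage 1 placeholder: stands in for the U-shaped network that
    removes the globally uniform blur caused by camera motion."""
    # A real model would be a learned encoder-decoder; here we simply
    # pass the input through to illustrate the data flow.
    return image

def adaptive_refine(coarse: np.ndarray, n_regions: int = 4) -> np.ndarray:
    """Stage 2 placeholder: stands in for the adaptive reasoning module
    that jointly models relationships between blurry regions."""
    # Split the image into horizontal bands as stand-in "regions" and
    # apply a per-region residual correction (zero here, learned in practice).
    bands = np.array_split(coarse, n_regions, axis=0)
    refined = [band + 0.0 for band in bands]
    return np.concatenate(refined, axis=0)

def two_stage_pipeline(blurry: np.ndarray) -> np.ndarray:
    """Coarse global deblurring followed by region-adaptive refinement."""
    return adaptive_refine(coarse_deblur(blurry))

blurry = np.random.rand(64, 64, 3)
restored = two_stage_pipeline(blurry)
assert restored.shape == blurry.shape
```

The point of the sketch is only the control flow: the first stage operates on the whole image, while the second stage reasons over regions before recombining them into a restored image of the same size.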