Abstract

With the flourishing development of deep learning in computer vision, defocus deblurring research based on it has become a growing focus. However, most work concentrates on defocus region detection or defocus map estimation, and algorithms that directly generate restored images are less studied. To address the defocus deblurring problem, we propose a deep defocus deblurring model based on multi-scale information and convolutional neural networks. Concretely, we first perform efficient and concise multi-scale information fusion with a selective receptive field module, so that the model can adapt to the scale sensitivity of defocused image regions. We then use a residual channel attention module in the bottleneck to extract correlations between channels, which enhances informative channels and suppresses useless ones. Finally, a fusion objective function combining an edge loss and a mean square loss is proposed to enhance the edge details of the image. Experimental results on a large-scale dual-pixel defocus deblurring dataset demonstrate that the proposed model outperforms both traditional and existing deep learning-based methods. Compared with state-of-the-art methods, the proposed model achieves a 0.44 dB improvement in PSNR.
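As an illustration of the fusion objective described above, the sketch below combines a pixel-wise mean square loss with an edge loss. The Laplacian edge operator and the weighting factor `edge_weight` are assumptions made for illustration only, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionLoss(nn.Module):
    """MSE loss plus an edge loss; edges are approximated with a fixed
    Laplacian kernel (an assumption, not necessarily the paper's operator)."""
    def __init__(self, edge_weight=0.1):
        super().__init__()
        self.edge_weight = edge_weight  # hypothetical weighting factor
        # 3x3 Laplacian kernel used to extract edge maps per channel
        kernel = torch.tensor([[0., 1., 0.],
                               [1., -4., 1.],
                               [0., 1., 0.]]).view(1, 1, 3, 3)
        self.register_buffer("kernel", kernel)

    def edges(self, x):
        # apply the Laplacian to each channel independently (grouped conv)
        c = x.shape[1]
        return F.conv2d(x, self.kernel.expand(c, 1, 3, 3), padding=1, groups=c)

    def forward(self, pred, target):
        mse = F.mse_loss(pred, target)                       # pixel-wise term
        edge = F.mse_loss(self.edges(pred), self.edges(target))  # edge term
        return mse + self.edge_weight * edge
```

In practice the edge term penalizes blurred or smeared boundaries that the plain mean square loss tends to tolerate, which is the motivation given for fusing the two terms.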
