Abstract

Image deblurring is a representative low-level vision task that aims to estimate a latent sharp image from a blurred image. Recently, convolutional neural network (CNN)-based methods have dominated image deblurring. However, traditional CNN-based deblurring methods suffer from two essential issues. First, existing multiscale deblurring methods process blurred images at different scales through sub-networks with the same composition, which limits model performance. Second, convolutional layers do not adapt to the input content and cannot effectively capture long-range dependencies. To alleviate these issues, we rethink the multiscale architecture that follows a coarse-to-fine strategy and propose a novel hybrid architecture that combines a CNN and a transformer (CTMS). CTMS has three distinct features. First, the finer-scale sub-networks in CTMS are designed with larger receptive fields to capture the pixel values around the blur, which allows large-area blur to be handled efficiently. Second, we propose a feature modulation network to alleviate the lack of input-content adaptation in the CNN sub-networks. Finally, we design an efficient transformer block that significantly reduces the computational burden and requires no pre-training. Our proposed deblurring model is extensively evaluated on several benchmark datasets and achieves superior performance compared to state-of-the-art deblurring methods. In particular, it attains peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of 32.73 dB and 0.959, respectively, on the popular GoPro dataset. In addition, we conduct joint evaluation experiments on the proposed method's deblurring performance, object detection, and image segmentation to demonstrate the effectiveness of CTMS for subsequent high-level computer vision tasks.
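The abstract reports results in PSNR and SSIM. As context for how the PSNR figure is computed, here is a minimal sketch of the standard PSNR definition for 8-bit images (this is the generic metric, not code from the paper; the function name and test arrays are illustrative only):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and an estimate.

    PSNR = 10 * log10(MAX^2 / MSE), where MSE is the mean squared error
    over all pixels and MAX is the maximum possible pixel value (255 for 8-bit).
    """
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Illustrative check: a constant error of 10 gray levels gives MSE = 100,
# so PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
sharp = np.zeros((64, 64), dtype=np.uint8)
deblurred = np.full((64, 64), 10, dtype=np.uint8)
print(round(psnr(sharp, deblurred), 2))
```

Higher PSNR means the restored image is closer to the ground-truth sharp image; SSIM complements it by comparing local structure rather than raw pixel error.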
