Abstract

Blur is one of the principal degradation factors in imaging, making image deblurring a fundamental problem in low-level computer vision. Owing to their limited receptive fields, conventional CNNs cannot model globally blurred regions and do not fully exploit the rich contextual information shared across features. Recently, transformer-based network architectures, which first excelled at natural language tasks, have driven rapid progress in deblurring. In this paper, we therefore adopt a hybrid CNN–transformer architecture for image deblurring. Specifically, we first extract shallow features from the blurred image using a cross-layer feature fusion block that emphasizes the contextual information of each feature extraction layer. Second, we design an efficient transformer module for deep feature extraction that fully aggregates medium- and long-range feature information through horizontal and vertical intra- and inter-strip attention layers, and that replaces the standard feed-forward network with a dual gating mechanism to effectively suppress redundant features. Finally, the cross-layer feature fusion block complements the feature information to produce the deblurred image. Extensive experiments on the public benchmarks GoPro and HIDE and on the real-world dataset RealBlur show that the proposed method outperforms current mainstream deblurring algorithms and recovers edge contours and texture details more clearly.
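The abstract does not give the exact formulation of the strip attention layers, but the general idea of intra-strip attention, restricting self-attention to one row (horizontal strip) or one column (vertical strip) of the feature map so that cost grows linearly in the number of strips, can be sketched as follows. This is a minimal NumPy illustration under assumed identity query/key/value projections, not the paper's implementation; the function name `strip_attention` and its signature are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def strip_attention(x, horizontal=True):
    """Self-attention restricted to strips of a feature map.

    x: array of shape (H, W, C).
    Horizontal strips attend along W (each row is one strip);
    vertical strips attend along H (each column is one strip).
    Identity Q/K/V projections are assumed for this sketch.
    """
    if not horizontal:
        x = x.transpose(1, 0, 2)  # treat columns as rows
    H, W, C = x.shape
    q = k = v = x
    # attention scores within each strip: (H, W, W)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C), axis=-1)
    out = attn @ v  # each pixel mixes information only within its strip
    if not horizontal:
        out = out.transpose(1, 0, 2)
    return out
```

Stacking a horizontal and a vertical pass lets every pixel reach every other pixel in two steps, which is how strip-style attention approximates global context at reduced cost; inter-strip attention (attending across pooled strip descriptors) would complement this, but its details are not specified in the abstract.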
