Current deep-learning-based image dehazing algorithms typically rely on conventional convolutional layers for feature extraction, which tends to lose image detail and edge information; they also ignore positional information during feature extraction and discard the original image information during feature fusion, so they struggle to restore clear, structurally complete haze-free images. To address these problems, a dehazing algorithm based on residual context attention and cross-layer feature fusion is proposed. First, a residual group structure is built by serially connecting the proposed residual context blocks; it extracts features in the first two (shallow) layers of the network to capture rich shallow contextual information. Second, coordinate attention is introduced to build an attention map that encodes positional information; it is applied to residual context feature extraction in the third (deep) layer of the network to extract deeper semantic information. Then, in the middle layers of the network, feature information from streams of different resolutions is fused across layers, enhancing the information exchange between shallow and deep layers and thereby strengthening the features. Finally, the semantically rich features produced by the network are aggregated with the original input to improve the restoration quality. Experimental results on the RESIDE and Haze4K datasets show that the proposed algorithm performs well in both visual quality and objective metrics.
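To illustrate the coordinate-attention idea the abstract describes, the following is a minimal NumPy sketch, not the authors' implementation: it performs direction-aware pooling along each spatial axis separately, so the resulting attention weights retain positional information. The learned 1x1 convolutions of a real coordinate-attention module are omitted (replaced by an identity transform) for brevity; all function names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x):
    """Sketch of coordinate attention for one feature map x of shape (C, H, W).

    The shared learned transforms of the real module are omitted here;
    only the direction-aware pooling and gating structure is shown.
    """
    C, H, W = x.shape
    # Pool along each spatial axis separately, so the descriptor for one
    # direction keeps positional information along the other direction.
    pool_h = x.mean(axis=2)              # (C, H): encodes vertical position
    pool_w = x.mean(axis=1)              # (C, W): encodes horizontal position
    # Squash the pooled descriptors into attention weights in (0, 1).
    a_h = sigmoid(pool_h)[:, :, None]    # (C, H, 1)
    a_w = sigmoid(pool_w)[:, None, :]    # (C, 1, W)
    # Reweight the features with both position-aware attention maps.
    return x * a_h * a_w

feat = np.random.randn(4, 8, 8)
out = coordinate_attention(feat)
print(out.shape)  # (4, 8, 8)
```

Because the two attention maps factor across height and width, each output location is modulated by weights tied to its row and its column, which is how the mechanism injects positional information into the features.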