Most existing image dehazing algorithms overlook local image details and fail to fully exploit features at different levels, so the restored haze-free images suffer from color distortion, reduced contrast, and residual haze. To address these issues, this paper proposes a method that dynamically enhances each pixel by exploiting the spatial variation of the dark channel prior across the image. In regions of high haze density, where local information is insufficient, a Transformer learns the global dependencies of the input features; in regions of low haze density, where local information remains reliable, parallel multi-scale attention extracts local features. When enhancing each pixel, the contributions of non-local and local information are weighted dynamically according to the image features. Furthermore, to better reflect the physical process of haze formation and improve the interpretability of the feature space, a dual-branch physics-aware unit is introduced: one branch learns features related to atmospheric scattering in the image, while the other captures its visual characteristics. In the experiments, a large dehazing dataset is used for training and testing, and the method is compared against existing dehazing approaches. The results demonstrate that the proposed method, which accounts for both local and global information, significantly improves dehazing performance.
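The dark channel prior and the atmospheric scattering model referenced above are standard in the dehazing literature. As background for readers unfamiliar with them, the following is a minimal NumPy sketch (not the paper's implementation): the dark channel is the per-pixel minimum over color channels followed by a local minimum filter, and the scattering model forms a hazy image as I = J·t + A·(1 − t), where J is the clear scene, t the transmission, and A the atmospheric light. Function names and the patch size are illustrative.

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel prior (He et al.): per-pixel minimum over the
    RGB channels, then a local minimum filter over a square patch.
    Low values indicate little haze; high values indicate dense haze."""
    min_rgb = image.min(axis=2)                # (H, W) channel-wise minimum
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode="edge") # replicate borders
    out = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for i in range(h):
        for j in range(w):
            # minimum over the patch_size x patch_size neighborhood
            out[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return out

def synthesize_haze(clear, transmission, airlight):
    """Atmospheric scattering model: I = J * t + A * (1 - t)."""
    t = transmission[..., None]                # broadcast over channels
    return clear * t + airlight * (1.0 - t)
```

A haze-free image with at least one dark color channel per patch has a dark channel near zero; blending in atmospheric light via the scattering model raises it, which is the spatial cue the proposed method uses to decide between local and non-local enhancement.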