Image dehazing is a representative low-level vision task that estimates latent haze-free images from hazy inputs. In recent years, Vision Transformers (ViTs) have shown promising dehazing performance, leveraging their capacity for global perception through long-sequence dependencies, albeit at the cost of high resource consumption. Our approach therefore seeks to integrate global information into the Convolutional Neural Network (CNN) framework in a more resource-efficient manner. In this paper, we introduce the Adaptive Center-Surround Receptive Field (ACSRF) network architecture for single-image haze removal, inspired by the central-peripheral receptive field in biological vision. This yields a unique receptive-field mechanism that effectively combines central and surrounding information: ACSRF first compresses global information and then merges it within the CNN, significantly boosting the ability to integrate local and global cues and to handle dominant color tones. Experimental results on four publicly available real-world image dehazing datasets show that ACSRF outperforms current state-of-the-art methods in recovering global information, especially under dominant color tones. Importantly, this technique demonstrates its effectiveness in realistic scenarios, contributing to improved traffic safety in adverse weather conditions. The code is available at https://github.com/JavanTang/ACSRF.
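The core idea stated above, compressing global ("surround") context and merging it back into local ("center") CNN features, can be sketched minimally as follows. The function name, the use of channel-wise global average pooling as the compression step, and the convex-combination fusion weight `alpha` are all illustrative assumptions for exposition, not the paper's actual ACSRF design.

```python
import numpy as np

def center_surround_fuse(feat, alpha=0.5):
    """Toy sketch of a center-surround fusion, under assumed details:
    compress global information into one scalar per channel (the
    "surround"), then merge it back into the local feature map (the
    "center") via a weighted sum.

    feat: array of shape (C, H, W), a CNN feature map.
    alpha: assumed fusion weight for the global branch.
    """
    # Compress global information: channel-wise global average pooling.
    surround = feat.mean(axis=(1, 2), keepdims=True)  # shape (C, 1, 1)
    # Merge the compressed global context into the local features;
    # broadcasting expands the surround signal back to (C, H, W).
    return (1.0 - alpha) * feat + alpha * surround

# Tiny usage example on a 2-channel, 4x4 feature map.
feat = np.arange(2 * 4 * 4, dtype=np.float64).reshape(2, 4, 4)
fused = center_surround_fuse(feat)
print(fused.shape)  # (2, 4, 4)
```

Because the fusion is a convex combination of a feature map with its own per-channel mean, each channel's mean is preserved while spatial contrast is pulled toward the global statistic, which is one simple way a global tone (e.g., a dominant haze color) can be propagated into local processing.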