Abstract
Most current image dehazing algorithms overlook local details and fail to fully exploit features at different levels, resulting in color distortion, reduced contrast, and residual haze in the restored images. To address these issues, this paper proposes a method that dynamically enhances pixels by exploiting the spatial variation of the dark channel prior across the image. For regions with high haze density, where local information is insufficient, a Transformer is employed to learn the global dependencies of the input features; for regions with low haze density, where local information is effective, parallel multi-scale attention is used to extract local features. When enhancing each pixel, the contributions of non-local and local information are determined dynamically from the image features. Furthermore, to better reflect the physical process of haze formation and improve the interpretability of the feature space, a dual-branch physics-aware unit is established that learns features related to atmospheric scattering in the image and captures its visual characteristics. In the experiments, a large dehazing dataset is used for training and testing, and the method is compared with existing dehazing approaches. The results demonstrate that by accounting for both local and global information, the proposed method significantly improves dehazing performance.
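As background, the physics-aware component relates to the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the haze-free scene, t the transmission map, and A the global atmospheric light, and the dark channel prior estimates haze density from local per-channel minima of the image. The sketch below illustrates, in PyTorch, one way the described dynamic fusion could look: a global self-attention branch and a local multi-scale branch blended per pixel by a gate derived from the dark channel. All module names, layer choices, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the dynamic local/global fusion described in the abstract.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Dark channel prior: per-pixel minimum over RGB, then a local minimum filter.

    img: (B, 3, H, W) in [0, 1]; returns (B, 1, H, W)."""
    min_rgb = img.min(dim=1, keepdim=True).values
    # A local minimum is the negative of a max-pool applied to the negated map.
    return -F.max_pool2d(-min_rgb, kernel_size=patch, stride=1, padding=patch // 2)


class LocalBranch(nn.Module):
    """Parallel multi-scale convolutions stand in for local multi-scale attention."""
    def __init__(self, ch: int):
        super().__init__()
        self.b3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.b5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.b3(x), self.b5(x)], dim=1))


class GlobalBranch(nn.Module):
    """One self-attention layer over flattened spatial positions (global dependencies)."""
    def __init__(self, ch: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2).reshape(b, c, h, w)


class DynamicFusion(nn.Module):
    """Per-pixel blend of global and local features, gated by estimated haze density."""
    def __init__(self, ch: int):
        super().__init__()
        self.local = LocalBranch(ch)
        self.glob = GlobalBranch(ch)
        self.gate = nn.Conv2d(1, 1, 3, padding=1)   # refines the dark-channel map

    def forward(self, feat: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        # Larger dark-channel values roughly indicate denser haze, so the gate
        # pushes those pixels toward the global (Transformer) branch.
        w = torch.sigmoid(self.gate(dark_channel(img)))
        w = F.interpolate(w, size=feat.shape[-2:], mode="bilinear", align_corners=False)
        return w * self.glob(feat) + (1.0 - w) * self.local(feat)


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                  # hazy input image
    feat = torch.rand(1, 32, 64, 64)                # encoder features (assumed)
    print(DynamicFusion(32)(feat, img).shape)       # torch.Size([1, 32, 64, 64])
```

The key design point the sketch tries to convey is that the blend weight is a spatially varying map rather than a scalar, so pixels in dense-haze regions can draw mostly on global context while pixels in thin-haze regions rely on local detail.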