Abstract
Without scene depth information, single-sensor dehazing methods based on deep learning or prior knowledge are ineffective in dense foggy scenes. An effective alternative is to remove dense fog by fusing visible and near-infrared images; however, current dehazing algorithms based on near-infrared and visible images suffer from color distortion and information loss. To overcome these challenges, we propose a color-preserving dehazing method that fuses near-infrared and visible images, and we introduce a dataset (VN-Haze) of visible and near-infrared images captured under hazy conditions. A two-stage image enhancement (TSE) method rectifies the color of visible images affected by fog, preventing the introduction of distorted color information. Furthermore, we propose an adaptive luminance mapping (ALM) method that prevents the color bias in fused images caused by the large brightness differences between visible and near-infrared images that occur in vegetation areas. The proposed visible-priority fusion strategy allocates weights between the visible and near-infrared images so as to minimize the loss of important features in the visible image. Compared with existing dehazing algorithms, the proposed method produces images with natural colors and less distortion, retains important visible information, and performs remarkably well in objective evaluations.
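The visible-priority fusion idea, at its simplest, is a pixel-wise weighted blend in which the visible image receives the larger weight wherever its content survives the haze. The sketch below is only an illustration of that weighting principle, not the paper's actual algorithm: the function name `visible_priority_fusion` and the base-weight parameter `priority` are hypothetical, and the weight rule (raising the NIR contribution only in dark visible regions) is an assumption made for the example.

```python
import numpy as np

def visible_priority_fusion(visible: np.ndarray, nir: np.ndarray,
                            priority: float = 0.7) -> np.ndarray:
    """Blend visible and NIR intensity maps, favoring the visible image.

    visible, nir: float arrays in [0, 1] with the same shape.
    priority: base weight for the visible image (hypothetical parameter;
              the paper's strategy computes weights differently).
    """
    # Weight grows with visible brightness, so NIR detail is injected
    # mainly where the visible image is dark (e.g., heavily hazed regions),
    # preserving visible features everywhere else.
    w_vis = priority + (1.0 - priority) * visible
    return w_vis * visible + (1.0 - w_vis) * nir

# Example: a mid-gray visible patch blended with a bright NIR patch
visible = np.full((2, 2), 0.5)
nir = np.full((2, 2), 1.0)
fused = visible_priority_fusion(visible, nir)
```

With `priority = 0.7`, the visible weight at a 0.5-gray pixel is 0.7 + 0.3 × 0.5 = 0.85, so the fused value stays much closer to the visible intensity than to the NIR one, which is the intent of a visible-priority scheme.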