Image visibility is often degraded under challenging conditions such as low light, backlighting, and inadequate contrast. To mitigate these issues, techniques such as histogram equalization, high dynamic range (HDR) tone mapping, and near-infrared (NIR)–visible image fusion are widely employed. However, these methods have inherent drawbacks: histogram equalization frequently causes oversaturation and detail loss, while visible–NIR fusion requires a complex and error-prone image acquisition process. The proposed algorithm, based on complementary CycleGAN training with visible and NIR images, leverages a cycle-consistent generative adversarial network (CycleGAN) to generate fake NIR images that blend the characteristics of visible and NIR images. This approach performs tone compression while preserving fine details, effectively addressing the limitations of traditional methods. Experimental results demonstrate that the proposed method outperforms conventional algorithms, delivering superior image quality and detail retention. This advancement holds substantial promise for applications where dependable image visibility is critical, such as autonomous driving and closed-circuit television (CCTV) surveillance systems.
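To make the CycleGAN-based idea concrete, the sketch below shows a generic unpaired visible-to-NIR translation step in PyTorch: two generators (visible→NIR and NIR→visible) are trained with adversarial and cycle-consistency losses so that the "fake NIR" output blends visible-image content with NIR-like tone. This is only an illustrative sketch under assumed defaults; the network architectures, channel counts, loss weights, and names (`G_v2n`, `G_n2v`, `D_n`, `D_v`) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of CycleGAN-style visible <-> NIR translation.
# Architectures and hyperparameters are placeholder assumptions, not the
# authors' implementation; real NIR data is typically single-channel.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Generator(nn.Module):
    """Toy image-to-image generator (stand-in for a full CycleGAN generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            conv_block(32, 32),
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """Toy PatchGAN-style discriminator scoring local image patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# Generators for visible -> fake NIR and NIR -> fake visible, plus one
# discriminator per target domain.
G_v2n, G_n2v = Generator(), Generator()
D_n, D_v = Discriminator(), Discriminator()

opt_G = torch.optim.Adam(list(G_v2n.parameters()) + list(G_n2v.parameters()), lr=2e-4)
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()

# One illustrative generator update on random stand-in images.
vis = torch.rand(1, 3, 64, 64)  # placeholder visible-light batch
nir = torch.rand(1, 3, 64, 64)  # placeholder NIR batch (unpaired with `vis`)

fake_nir = G_v2n(vis)  # "fake NIR": visible content with NIR-like appearance
fake_vis = G_n2v(nir)

# Adversarial terms: each generator tries to fool its domain discriminator.
pred_fake_nir, pred_fake_vis = D_n(fake_nir), D_v(fake_vis)
loss_adv = adv_loss(pred_fake_nir, torch.ones_like(pred_fake_nir)) + \
           adv_loss(pred_fake_vis, torch.ones_like(pred_fake_vis))

# Cycle-consistency terms: translating back should recover the original input.
loss_cyc = cyc_loss(G_n2v(fake_nir), vis) + cyc_loss(G_v2n(fake_vis), nir)

loss_G = loss_adv + 10.0 * loss_cyc  # lambda = 10 is a common CycleGAN default
opt_G.zero_grad()
loss_G.backward()
opt_G.step()
# A full training loop would alternate this step with separate discriminator
# updates on real vs. generated images (omitted here for brevity).
```

The cycle-consistency term is what lets the generator be trained on unpaired visible and NIR images, which is the practical appeal of a CycleGAN formulation over fusion methods that need aligned image pairs.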