Abstract

Images captured in dark scenes suffer not only from low-light conditions but also from mixed exposures, posing great challenges to both human and machine vision. Most current methods focus predominantly on enhancing low-exposure images. For mixed-exposure low-light images, however, the high-exposure regions become over-enhanced, which destroys detail and degrades visual quality. To address this problem, we design a new unsupervised low-light enhancement method, called LEES-Net, that is robust to mixed-exposure cases. LEES-Net transforms low-light enhancement and exposure suppression into a curve estimation problem, effectively reducing the complexity of image enhancement. By incorporating attention mechanisms, the low-exposure and high-exposure regions of an image can be targeted and dynamically adjusted, so the enhanced images retain a pleasing visual appearance. Extensive experiments show that our method outperforms other state-of-the-art unsupervised methods in generalization ability, robustness, and visual quality. Furthermore, we propose a more lightweight network, called LEES-Net+, with only 4.464 K parameters and an inference time of 0.002 s; it preserves the enhancement performance of LEES-Net at a lower computational and parameter cost and is better suited to deployment on resource-limited devices with real-time requirements.
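
To make the curve-estimation idea concrete, the following is a minimal sketch of how an estimated per-pixel curve can both brighten low-exposure regions and suppress high-exposure ones. The abstract does not give LEES-Net's exact curve form or network, so this example assumes a Zero-DCE-style iterative quadratic curve, LE(x) = x + A·x·(1 - x), with a hypothetical parameter map A predicted elsewhere; it is an illustration, not the authors' implementation.

import numpy as np

def apply_curve(image, curve_maps):
    # image      : H x W x 3 array with values normalized to [0, 1]
    # curve_maps : list of H x W x 3 parameter maps A_k in [-1, 1], one per
    #              iteration (in practice these would be predicted by a network)
    x = image.astype(np.float32)
    for A in curve_maps:
        # Positive A brightens (low-exposure regions), negative A darkens
        # (exposure suppression in high-exposure regions).
        x = x + A * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Usage: brighten a dark synthetic patch with a constant positive curve map
# applied over four iterations (values chosen arbitrarily for illustration).
img = np.full((4, 4, 3), 0.2, dtype=np.float32)
maps = [np.full_like(img, 0.8)] * 4
out = apply_curve(img, maps)

Because the curve acts per pixel, an attention-style map can simply flip the sign or scale of A in bright regions, which is how targeted adjustment of mixed exposures can be expressed within the same formulation.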
