Abstract

The ongoing push toward efficient intelligent visual systems is hampered by the difficulty of low-light image enhancement. To improve image perception, low-light scenes captured under diverse illumination conditions must be handled appropriately. However, typical CNN-based methods apply the same set of parameters to every image, which limits their ability to handle complex scenes. Moreover, existing deep models integrate low-level and high-level features by simple addition or concatenation, lacking designs tailored to the low-light image enhancement task. To address these challenges, we propose a zero-referenced adaptive filter network (ZAFN) for low-light image enhancement. Specifically, adaptive filters are generated by integrating high-level content from multiple partial scenes. An iterative enlightening process is then conducted on low-level features that are dynamically modulated by the adaptive filters. To remove the need for paired training data and enable zero-referenced learning, we propose a color enhancement loss, a global consistency loss, and a self-regularized denoising loss for high-quality results. With a small model size and low computational cost, our ZAFN model outperforms other state-of-the-art zero-referenced methods on four popular datasets.
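The abstract does not specify the exact architecture, so the following PyTorch sketch illustrates only one plausible reading of "low-level features dynamically modulated with adaptive filters": a small head predicts a per-image, per-channel depthwise kernel from pooled high-level context, and that kernel filters the low-level feature map. All names and shapes here (AdaptiveFilterBlock, channels, kernel_size) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFilterBlock(nn.Module):
    """Hypothetical sketch of per-image dynamic filtering (not the ZAFN code):
    high-level content is pooled into a descriptor, a linear head maps it to
    depthwise convolution kernels, and those kernels modulate low-level features.
    Assumes low- and high-level feature maps share the same channel count."""

    def __init__(self, channels: int = 32, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Predict one k x k kernel per channel from the global context vector.
        self.kernel_head = nn.Linear(channels, channels * kernel_size * kernel_size)

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = low_feat.shape
        k = self.kernel_size
        # Summarize high-level content into one descriptor per image: (B, C).
        context = F.adaptive_avg_pool2d(high_feat, 1).flatten(1)
        # Reshape predicted kernels for a per-image, per-channel depthwise conv.
        kernels = self.kernel_head(context).view(b * c, 1, k, k)
        # Fold the batch into the channel axis so groups=b*c applies each
        # image's own kernels independently.
        x = low_feat.reshape(1, b * c, h, w)
        out = F.conv2d(x, kernels, padding=k // 2, groups=b * c)
        return out.view(b, c, h, w)

# Usage sketch: modulate 32-channel low-level features with pooled high-level context.
block = AdaptiveFilterBlock(channels=32)
low = torch.randn(2, 32, 64, 64)
high = torch.randn(2, 32, 16, 16)
out = block(low, high)  # (2, 32, 64, 64), filtered with image-specific kernels
```

The key design point this sketch captures is that the filter weights are a function of the input image rather than fixed network parameters, which is what lets a single lightweight model adapt to varying illumination conditions.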
