Abstract

This paper proposes that single-image low-light enhancement can be accomplished by a straightforward method named Opt2Ada. It consists of a series of pixel-level operations: an optimized illuminance channel decomposition, an adaptive illumination enhancement, and an adaptive global scaling. Opt2Ada is a traditional (non-learning) method that does not rely on architecture engineering, hyperparameter tuning, or a specific training dataset. Its parameters are generic, giving it better generalization capability than existing data-driven methods. For evaluation, full-reference, non-reference, and semantic metrics are all reported. Extensive experiments on real-world low-light images demonstrate the superiority of Opt2Ada over recent traditional and deep learning algorithms. Owing to its flexibility and effectiveness, Opt2Ada can be deployed as a pre-processing subroutine for high-level computer vision applications.
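
To make the three-stage pipeline named in the abstract concrete, the following is a minimal, hypothetical sketch of a pixel-level flow in that spirit (illuminance decomposition, adaptive enhancement, global scaling). The specific choices here, such as taking the max over RGB as the illuminance channel and using a brightness-driven gamma, are illustrative assumptions and not the exact Opt2Ada operations described in the paper.

```python
import numpy as np

def enhance_low_light(rgb):
    """Sketch of a decompose -> enhance -> rescale pipeline.

    rgb: float array in [0, 1], shape (H, W, 3).
    """
    # 1) Illuminance channel decomposition (assumption: max over RGB channels).
    illum = rgb.max(axis=2, keepdims=True)
    illum = np.clip(illum, 1e-4, 1.0)

    # 2) Adaptive illumination enhancement (assumption: gamma set from mean
    #    brightness, so darker inputs receive a stronger boost).
    gamma = 0.4 + 0.6 * illum.mean()
    enhanced_illum = illum ** gamma

    # 3) Adaptive global scaling: re-apply the enhanced illuminance to the
    #    color image and rescale the result back into [0, 1].
    out = rgb * (enhanced_illum / illum)
    return np.clip(out / max(out.max(), 1.0), 0.0, 1.0)

# Usage (assuming an image loaded as a float array in [0, 1]):
# result = enhance_low_light(img)
```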
