Abstract
Vast numbers of pictures are taken every day with cameras mounted on various mobile devices. Although the clarity of these images has improved significantly with advances in image sensor technology, their visual quality is hardly guaranteed under varying illumination conditions. In this paper, a novel yet simple method for low-light image enhancement is proposed based on the maximal diffusion value. The key idea of the proposed method is to estimate the illumination component, which tends to appear as bright pixels even under low-light conditions, by exploring multiple diffusion spaces. Specifically, the illumination component can be accurately separated from the scene reflectance by selecting the maximal value at each pixel position across those diffusion spaces, and thus adjusted independently for visual quality enhancement. In other words, the maximal value among the diffused intensities at each pixel position, the so-called maximal diffusion value, is adopted as the illumination component, since illumination components buried in the dark tend to be revealed with bright intensities through the iterative diffusion process. In contrast to previous approaches, which still struggle to balance over-saturated and overly conservative restorations, the proposed method improves image quality without significant distortion while successfully suppressing noise amplification. Experimental results on benchmark datasets demonstrate the efficiency and robustness of the proposed method compared to previous approaches in the literature.
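To illustrate the core idea, the sketch below shows one possible way a maximal diffusion value could be computed; it is an illustrative assumption rather than the authors' implementation. A simple Gaussian blur stands in for the diffusion step, the per-pixel maximum over the successively diffused images serves as the illumination estimate, and a Retinex-style decomposition with gamma correction is used for the adjustment. The function names, the number of iterations, the diffusion strength, and the gamma value are all hypothetical choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def maximal_diffusion_illumination(luminance, num_iters=8, sigma=2.0):
    """Estimate the illumination map as the per-pixel maximum over a
    sequence of progressively diffused versions of the luminance channel.
    Sketch only: a Gaussian blur stands in for the diffusion process,
    and num_iters / sigma are illustrative values."""
    diffused = luminance.astype(np.float64)       # luminance assumed in [0, 1]
    illumination = diffused.copy()
    for _ in range(num_iters):
        diffused = gaussian_filter(diffused, sigma=sigma)   # one diffusion step
        illumination = np.maximum(illumination, diffused)   # keep the maximal diffusion value
    return illumination

def enhance(luminance, gamma=2.2, eps=1e-6):
    """Retinex-style enhancement: separate reflectance from the estimated
    illumination, brighten the illumination, and recombine."""
    L = maximal_diffusion_illumination(luminance)
    reflectance = luminance / (L + eps)           # R = I / L
    L_adjusted = np.power(L, 1.0 / gamma)         # brighten the dark illumination
    return np.clip(reflectance * L_adjusted, 0.0, 1.0)
```

In practice, the illumination map would typically be refined further (e.g., with edge-aware smoothing) before recombination, and the color channels would be restored from the enhanced luminance; those steps are omitted in this sketch.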
Highlights
Low-light conditions in everyday photos often arise from various environmental factors, e.g., nighttime, uneven illumination, and structured shadows
We propose to adopt the maximal value at each pixel position across multiple diffusion spaces as the illumination component
Various experimental results are demonstrated on two benchmark datasets, i.e., the NASA [34] and HDR [35] datasets, which have been the most widely employed for the performance evaluation of low-light image enhancement
Summary
Low-light conditions in everyday photos often arise from various environmental factors, e.g., nighttime, uneven illumination, and structured shadows. This leads to loss of detail and altered appearance of underlying structures in a given scene, which significantly deteriorates image quality and degrades the viewing experience. Such degraded inputs cause a dramatic performance drop in many computer vision algorithms, e.g., object detection [1], recognition [2], stereo matching [3], etc.