Under poor illumination, part of the image information captured by a camera is lost, which severely degrades human visual perception. Inspired by the idea that fusing multi-exposure images can yield a single high-quality image, an adaptive enhancement framework for a single low-light image is proposed based on a virtual-exposure strategy. In this framework, the exposure control parameters are generated adaptively through a statistical analysis of the low-light image, and a virtual exposure enhancer constructed from a quadratic function produces several image frames from the single input image. Weight maps are then derived from three factors, namely contrast, saturation, and saliency; the image sequence and the weight maps are decomposed into a Laplacian pyramid and a Gaussian pyramid, respectively, and multiscale fusion is performed layer by layer. Finally, the enhanced result is obtained by the pyramid reconstruction rule. In experiments on five datasets, the proposed method outperforms several state-of-the-art methods on several image quality evaluation metrics. The method requires neither image calibration nor estimation of the camera response function, and it therefore has a more flexible range of application. It reduces the risk of over-enhancement, effectively avoids halo artifacts in the enhanced results, and adaptively improves visual information fidelity.
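The following is a minimal sketch of the pipeline the abstract describes, not the authors' implementation: it assumes a brightening quadratic enhancer of the form f(x) = (1+a)x - ax^2, a frequency-tuned-style saliency cue, and a Mertens-style pyramid blend. The exposure parameters A_VALUES, the pyramid depth, the multiplicative combination of the three weight cues, and the file names are illustrative choices; the paper's adaptive parameter selection from image statistics is not reproduced here.

```python
import cv2
import numpy as np

A_VALUES = (0.0, 0.45, 0.75, 0.95)  # hypothetical exposure-control parameters

def quadratic_exposure(img, a):
    """Quadratic virtual-exposure enhancer f(x) = (1+a)x - a*x^2.
    Brightens for a in (0, 1]; f(0)=0 and f(1)=1, so [0,1] is preserved.
    a=0 keeps the original frame in the fusion sequence."""
    return np.clip((1.0 + a) * img - a * img * img, 0.0, 1.0)

def weight_map(img):
    """Per-pixel weight from contrast, saturation and saliency cues."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Contrast: magnitude of the Laplacian response.
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
    # Saturation: standard deviation across the color channels.
    saturation = img.std(axis=2)
    # Saliency: frequency-tuned style, distance of the blurred image
    # to the mean image color (a stand-in for the paper's saliency cue).
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    mean_color = img.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(blur - mean_color, axis=2)
    return contrast * saturation * saliency + 1e-12

def fuse(images, levels=4):
    """Multiscale fusion: Gaussian pyramids of the weight maps blend the
    Laplacian pyramids of the exposure sequence level by level."""
    weights = np.stack([weight_map(im) for im in images])
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across frames
    fused_pyr = None
    for im, w in zip(images, weights):
        # Gaussian pyramid of the weight map.
        gw = [w.astype(np.float32)]
        for _ in range(levels):
            gw.append(cv2.pyrDown(gw[-1]))
        # Laplacian pyramid of the image (plus the coarse residual).
        gi = [im.astype(np.float32)]
        for _ in range(levels):
            gi.append(cv2.pyrDown(gi[-1]))
        lp = [gi[i] - cv2.pyrUp(gi[i + 1], dstsize=gi[i].shape[1::-1])
              for i in range(levels)] + [gi[-1]]
        # Weighted contribution of this frame at every pyramid level.
        contrib = [l * g[..., None] for l, g in zip(lp, gw)]
        fused_pyr = contrib if fused_pyr is None else [
            f + c for f, c in zip(fused_pyr, contrib)]
    # Collapse the fused pyramid from coarse to fine.
    out = fused_pyr[-1]
    for lev in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused_pyr[lev].shape[1::-1]) + fused_pyr[lev]
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # "low_light.jpg" is a placeholder input path.
    low = cv2.imread("low_light.jpg").astype(np.float32) / 255.0
    seq = [quadratic_exposure(low, a) for a in A_VALUES]
    cv2.imwrite("enhanced.jpg", (fuse(seq) * 255).astype(np.uint8))
```

Passing an explicit dstsize to cv2.pyrUp keeps each upsampled level aligned with its finer neighbor even when image dimensions are odd, which is what makes the level-by-level blend and the final reconstruction consistent.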