Abstract

Images captured in low-light conditions often suffer from poor visibility, e.g., low contrast, lost detail, and color distortion; image enhancement methods can be used to improve their quality. Previous methods have generally estimated a smooth illumination map to enhance the image but have ignored details, leading to inaccurate illumination estimates. To solve this problem, we propose a multi-scale attention retinex network (MARN) for low-light image enhancement, which, inspired by retinex theory, learns an image-to-illumination mapping to obtain a detailed inverse illumination map. To introduce more image priors, we design a novel illuminance-attention map that guides the model to characterize regions with varying lighting, and we combine it with the low-light image as the model input. MARN consists of a multi-scale attention module and a feature fusion module: the former extracts multi-resolution features with attention-based feature aggregation, while the latter further merges the output features of the previous module with the input. To achieve better visibility, we formulate a novel loss function that jointly measures the illumination, detail, and colorfulness of the image. Extensive experiments on several benchmark datasets demonstrate that our method outperforms other state-of-the-art methods on both objective and subjective metrics.
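The retinex-based formulation above can be made concrete with a minimal sketch. Retinex theory models an observed image S as the element-wise product of reflectance R and illumination L (S = R · L), so enhancement amounts to multiplying the low-light image by a per-pixel inverse illumination map. The function below is an illustrative assumption, not the paper's network: MARN *predicts* the inverse illumination map, whereas here we simply apply a given one.

```python
import numpy as np

def enhance_with_inverse_illumination(low_img, inv_illum):
    """Apply a per-pixel inverse illumination map to a low-light image.

    low_img   : float array in [0, 1], shape (H, W, 3)
    inv_illum : float array >= 1, shape (H, W, 1) or (H, W, 3),
                broadcast element-wise over the image
    """
    enhanced = low_img * inv_illum          # retinex: R = S * (1 / L)
    return np.clip(enhanced, 0.0, 1.0)      # keep values in valid range

# Toy example: a uniformly dark image brightened by a constant map
# (in MARN this map would come from the network, not a constant).
low = np.full((4, 4, 3), 0.2)
inv_l = np.full((4, 4, 1), 3.0)
out = enhance_with_inverse_illumination(low, inv_l)   # pixels become 0.6
```

A detailed (rather than smooth) inverse illumination map lets this multiplication recover fine structure, which is the motivation for the network design described below.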

Highlights

  • Because of environmental or technical constraints, images are often captured under complicated lighting conditions

  • We propose a novel framework, the multi-scale attention retinex network (MARN), which predicts a well-detailed inverse illumination map for low-light image enhancement

  • A novel illuminance-attention map, combined with the low-light image as the model input, guides the model to characterize varying-lighting areas


Summary

INTRODUCTION

Because of environmental or technical constraints, images are often captured under complicated lighting conditions. Retinex-based methods usually enhance the illumination component of a low-light image to approximate the corresponding normal-light image. Some methods are unsupervised, applying generative adversarial networks (GANs) [18] or non-reference loss functions to estimate normal-light images from low-light images. To predict a well-detailed illumination map for low-light image enhancement, in this paper we propose a novel framework called the multi-scale attention retinex network (MARN), which is designed to predict a detailed inverse illumination map containing detail and color information. Unlike previously proposed CNN methods, MARN employs a novel multi-scale attention module for feature extraction, which greatly improves the generalization capability of the network. An ablation study is conducted to demonstrate the efficacy of our structure.
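The input construction described above (combining an illuminance-attention map with the low-light image) can be sketched as follows. The attention definition here is an assumption for illustration only: a common proxy for illuminance is the maximum RGB channel, so dark regions get high attention via `1 - max(R, G, B)`; the paper's exact formulation may differ.

```python
import numpy as np

def illuminance_attention_map(low_img):
    """Hypothetical attention map: darker pixels -> higher attention.

    low_img : float array in [0, 1], shape (H, W, 3)
    returns : float array in [0, 1], shape (H, W, 1)
    """
    brightness = low_img.max(axis=-1, keepdims=True)  # rough illuminance proxy
    return 1.0 - brightness                            # invert: dark -> 1

def build_model_input(low_img):
    """Concatenate the image and its attention map into a 4-channel input,
    mirroring how the attention map and low-light image are combined."""
    att = illuminance_attention_map(low_img)
    return np.concatenate([low_img, att], axis=-1)     # shape (H, W, 4)

# A fully dark 8x8 image yields an attention map of all ones.
x = build_model_input(np.zeros((8, 8, 3)))
```

Feeding this extra channel to the network gives it an explicit prior on which regions are poorly lit, before the multi-scale attention module extracts features.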

RELATED WORK
THE ILLUMINANCE-ATTENTION MAP
EXPERIMENTAL SETTING
Findings
CONCLUSION
