Abstract

Current low-light image enhancement methods have made great progress in improving the visibility of low-light images. Nevertheless, they pay less attention to preserving visual naturalness and therefore often introduce over-enhancement and local artifacts into their results. To address this issue, it is useful to introduce additional multi-view information about an image, such as its illumination distribution, into enhancement models. In this context, we propose a simple but effective loss term that encourages the originally bright regions of an input image and the corresponding regions of its enhanced counterpart to be as similar as possible. By fully exploiting the illumination distribution of an image, the loss term enables enhancement models to learn which regions should be preserved during training. As a result, unnatural effects in the output images are effectively alleviated. In our experiments, we incorporate the proposed loss term into several recently proposed low-light image enhancement models. Experimental results on multiple datasets show that over-enhancement and local artifacts can be effectively suppressed by our loss term.
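
As an illustration of how such a term could be implemented, below is a minimal PyTorch sketch. It assumes that originally bright regions are identified by a simple luminance threshold and penalized with an L1 distance; the function name, threshold value, and masking strategy are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def brightness_preservation_loss(low_light, enhanced, threshold=0.7):
    """L1 distance between the input and enhanced images, restricted
    to regions that are already bright in the input.

    Assumes both tensors are RGB images in [0, 1] with shape
    (N, 3, H, W); the threshold of 0.7 is an illustrative choice.
    """
    # Per-pixel luminance of the input (ITU-R BT.601 weights).
    weights = torch.tensor([0.299, 0.587, 0.114],
                           device=low_light.device).view(1, 3, 1, 1)
    luminance = (low_light * weights).sum(dim=1, keepdim=True)  # (N, 1, H, W)

    # Binary mask marking the originally bright regions.
    mask = (luminance > threshold).float()

    # Penalize any change the model makes inside the masked regions,
    # normalized by the number of masked pixel-channels.
    masked_diff = torch.abs(enhanced - low_light) * mask
    return masked_diff.sum() / (mask.sum() * low_light.size(1) + 1e-8)
```

During training, a term like this would typically be added to a model's existing enhancement losses with a weighting coefficient, e.g. `loss = base_loss + lam * brightness_preservation_loss(x, y)`, so that well-exposed regions are left largely unchanged while dark regions are still enhanced.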
