Abstract

This paper presents an image enhancement model, D2BGAN (Dark to Bright Generative Adversarial Network), that translates low-light images to bright images without paired supervision. We introduce geometric and lighting consistency losses along with a contextual loss criterion. Combined with multiscale color, texture, and edge discriminators, these yield competitive results. We performed extensive experiments on benchmark datasets to compare our results both visually and objectively. We evaluated D2BGAN on real-world driving datasets that are subject to motion blur, noise, and other artifacts, and further demonstrated that our enhanced images can be profitably used in image-understanding tasks. Images processed with our technique obtain the best or second-best average scores under three different image quality evaluation methods on the Naturalness Preserved Enhancement (NPE), Low Light Image Enhancement (LIME), and Multi-Exposure Image Fusion (MEF) benchmark datasets. Best scores are also obtained on the LOw-Light (LOL) test set and on Berkeley Driving Dataset (BDD) images processed with D2BGAN. On the face detection task of the DarkFace benchmark dataset, processing images with D2BGAN improves mAP (mean Average Precision) from 0.209 to 0.301, and mAP further improves to 0.525 when fine-tuning techniques are adopted.
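The abstract names several loss terms (adversarial losses from the multiscale discriminators, geometric consistency, lighting consistency, and a contextual loss). As a minimal sketch of how such terms are typically combined into a single generator objective, the function below computes a weighted sum; the weight values and function name are illustrative assumptions, not the paper's actual hyperparameters.

```python
# Hypothetical sketch: combining D2BGAN-style loss terms into one objective.
# Weight values are placeholders for illustration, not the paper's settings.

def total_generator_loss(adv, geometric, lighting, contextual,
                         w_adv=1.0, w_geo=10.0, w_light=5.0, w_ctx=1.0):
    """Return a weighted sum of adversarial and consistency loss terms."""
    return (w_adv * adv
            + w_geo * geometric
            + w_light * lighting
            + w_ctx * contextual)

# Example with dummy scalar loss values:
loss = total_generator_loss(adv=0.5, geometric=0.02, lighting=0.1, contextual=0.3)
```

In practice each argument would be a differentiable tensor produced by the corresponding loss module, and the weights would be tuned per dataset.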
