Abstract

Images captured under unavoidable low-light conditions typically suffer from low contrast and poor visibility, which degrades downstream image analysis tasks. Improving the visibility of such images is important but difficult because the illumination degradation varies spatially. Pre-trained networks generalize poorly to new scenes because of sample bias in their training data. Retinex theory is widely used in image enhancement, but many Retinex-based algorithms discard the illumination and treat the reflectance alone as the enhanced result, so over-enhancement and unnatural artifacts are unavoidable. To address these issues, a self-supervised learning neural network for enhancing poorly illuminated images is proposed. The proposed convolutional neural network architecture consists of a decomposition module and an enhancement module: the decomposition module estimates the illumination, and the enhancement module then produces the Retinex-based enhanced output. Experimental results show output images with better contrast and visibility than those of existing methods.
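For readers unfamiliar with the Retinex formulation behind this kind of design, the sketch below illustrates how a two-module decomposition/enhancement network can be organised. It is a minimal illustration only, written in PyTorch as an assumption (the abstract does not name a framework), and all module names and layer sizes are hypothetical rather than the authors' architecture. It models the image as I = R * L, predicts the illumination L with a decomposition module, recovers the reflectance R with an enhancement module, and uses a self-supervised reconstruction loss that requires R * L to match the input image.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: module names and layer sizes are assumptions,
# not the paper's published architecture. It demonstrates the Retinex idea
# I = R * L with a decomposition module that predicts the illumination L and
# an enhancement module that recovers the reflectance R as the enhanced image.

class DecompositionModule(nn.Module):
    """Predicts a single-channel illumination map L from the low-light image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # keep L in (0, 1)
        )

    def forward(self, image):
        return self.net(image)


class EnhancementModule(nn.Module):
    """Refines the reflectance R = I / L, which serves as the enhanced output."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, illumination):
        # Retinex decomposition: reflectance is the image divided by illumination.
        reflectance = image / illumination.clamp(min=1e-4)
        return self.net(reflectance)


if __name__ == "__main__":
    low_light = torch.rand(1, 3, 128, 128)  # dummy low-light input
    decompose, enhance = DecompositionModule(), EnhancementModule()
    illumination = decompose(low_light)
    enhanced = enhance(low_light, illumination)
    # Self-supervised reconstruction loss: R * L should reproduce the input,
    # so no paired ground-truth bright image is required.
    loss = nn.functional.l1_loss(enhanced * illumination, low_light)
    print(enhanced.shape, float(loss))
```

In practice such a loss is usually combined with smoothness or exposure priors on the illumination map; the single reconstruction term above is kept only to make the self-supervised idea concrete.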
