Abstract

Images captured with different devices under uneven conditions (e.g., variable lighting, low light, weather changes, exposure time) often suffer from low visibility and poor color and contrast, degrading the performance of computer vision and pattern recognition applications. Pre-trained convolutional neural networks (CNNs) rely solely on their training data and adapt poorly to uncertain lighting conditions. Moreover, capturing large-scale datasets to train CNNs raises computational complexity and overall cost. This work integrates knowledge and data and proposes a two-stage Uneven-to-Enliven network (U2E-Net) that rapidly learns to see in uneven conditions. A multi-layered Uneven network learns to separate reflectance and illumination in the input images, and an encoder–decoder-based Enliven-Net contextualizes the illumination information. A key component in such ill-posed problems is obtaining information from priors and paired data; instead, we present the compelling idea of an information trade-off followed by decomposition consistency, progressively improving visual quality through subsequent enhancement operations. To this end, we propose a two-faceted framework that works independently of the data type. A novel color and contrast preservation strategy (CPS), applied after decomposing the input data, is integrated within the network to extract contrast in the darkest background regions.
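The two-stage idea described above follows the classical Retinex model, in which an observed image S is the product of reflectance R and illumination L. The sketch below is a minimal illustration of that principle only, not the authors' U2E-Net: the illumination estimate (per-pixel channel maximum) and the gamma-based enhancement stage are common baseline choices assumed here for illustration.

```python
import numpy as np

def decompose(image, eps=1e-6):
    """Illustrative Retinex-style decomposition: S = R * L.

    Illumination L is estimated as the per-pixel channel maximum
    (a common initial estimate, not the learned Uneven network);
    reflectance follows as R = S / L.
    image: H x W x 3 float array in [0, 1].
    """
    illumination = image.max(axis=2, keepdims=True)       # L: H x W x 1
    reflectance = image / np.maximum(illumination, eps)   # R: H x W x 3
    return reflectance, illumination

def enliven(reflectance, illumination, gamma=0.5):
    """Toy stand-in for the enhancement stage: brighten the
    illumination map with a gamma curve, then recombine."""
    enhanced_l = np.power(illumination, gamma)
    return np.clip(reflectance * enhanced_l, 0.0, 1.0)
```

Because gamma < 1 raises dark illumination values more than bright ones, recombining the unchanged reflectance with the brightened illumination lifts under-exposed regions while preserving color, which is the intuition behind enhancing L rather than S directly.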
