Abstract

In the image processing pipelines of digital cameras, one of the first steps is to achieve invariance with respect to scene illumination, namely computational color constancy. This is usually done in two successive steps: illumination estimation and chromatic adaptation. Illumination estimation infers a three-dimensional vector from image pixels. This vector represents the scene illumination, and it is used in the chromatic adaptation step, which eliminates the bias in image colors caused by the color of the illumination. Accurate illumination estimation is crucial for successful computational color constancy. However, it is an ill-posed problem, and many methods approach it under different assumptions. In this paper, an iterative method for estimating the scene illumination color is proposed. The method calculates the illumination vector through a series of intermediate illumination estimations and chromatic adaptations of an input image using a convolutional neural network. The network has been trained to iteratively compute intermediate incremental illumination estimates from the original image. The incremental illumination estimates are combined by element-wise multiplication to obtain the final illumination estimation. The approach aims to reduce the large estimation errors that usually occur with highly saturated light sources. Experimental results show that the proposed method outperforms the vast majority of illumination estimation methods in terms of median angular error. Moreover, on the worst-performing samples, i.e., the samples for which a method errs the most, the proposed method outperforms all other methods by a margin of more than 18% with respect to the mean of estimation errors in the third quartile.
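The iterative procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the CNN forward pass is stood in for by a hypothetical `estimate_step` function (here a simple gray-world estimate), and the chromatic adaptation uses the common von Kries diagonal model.

```python
import numpy as np

def estimate_step(image):
    """Stand-in for one CNN forward pass: a gray-world estimate
    takes the place of the network's incremental illumination output."""
    est = image.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

def iterative_illumination_estimate(image, n_iters=3):
    """Combine incremental illumination estimates by element-wise
    multiplication, applying an intermediate chromatic adaptation
    of the image after each step."""
    combined = np.ones(3)
    adapted = image.astype(np.float64)
    for _ in range(n_iters):
        inc = estimate_step(adapted)   # intermediate incremental estimate
        combined *= inc                # element-wise combination
        adapted = adapted / inc        # von Kries-style diagonal adaptation
    return combined / np.linalg.norm(combined)
```

After each intermediate adaptation the residual color cast shrinks, so later iterations only need to correct what earlier ones missed; the product of the unit-norm incremental estimates, renormalized, is the final illumination vector.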

Highlights

  • In digital photography, any illumination present in the scene of interest significantly impacts the colors of the objects in digital images

  • Intermediate illumination vectors estimated in the iterations are element-wise multiplied to produce the final illumination vector that corresponds to the scene illumination captured in the original raw input image

  • The Cube+ dataset [45] was used to train and test the proposed illumination estimation network and the iterative procedure; it contains 1707 images labeled for global illumination estimation


Summary

Introduction

Any illumination present in the scene of interest significantly impacts the colors of the objects in digital images. If the same scene is captured with the same camera (i.e., the reflectance of the object surfaces and the spectral sensitivity of the camera sensor are constant) while the spectrum of the light source changes, the colors in the captured images will most likely differ. For most digital cameras, one of the first steps in the image processing pipeline is dedicated to achieving illumination invariance. This process can be associated with the ability of the human visual system to adapt to changes in scene illumination, namely color constancy [2].
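The effect described here can be illustrated with the widely used von Kries diagonal model, under which the observed color is the channel-wise product of surface reflectance and illuminant. The numeric values below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Under the von Kries diagonal model, observed color = reflectance * illuminant,
# channel-wise. The same surface therefore looks different under different lights.
reflectance = np.array([0.5, 0.5, 0.5])   # a neutral gray surface
daylight = np.array([1.0, 1.0, 1.0])      # near-white illuminant (assumed)
tungsten = np.array([1.0, 0.8, 0.5])      # warm, reddish illuminant (assumed)

obs_day = reflectance * daylight          # appears gray
obs_tung = reflectance * tungsten         # appears orange-tinted

# Chromatic adaptation: dividing by the (estimated) illuminant recovers
# the illumination-invariant surface color.
recovered = obs_tung / tungsten
```

Since adaptation is a simple per-channel division, the quality of the result hinges entirely on how accurately the illuminant vector is estimated, which is why illumination estimation is the critical step.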

