Color constancy is an important property of the human visual system that allows us to perceive the colors of objects invariant to the light illuminating them. Computational color constancy is the task of estimating the illumination of a scene by computational means. This problem is inherently ill-posed: the RGB value of each pixel depends on both the spectral reflectance of the object and the spectral power distribution of the illumination. Hence, methods that attempt to solve the computational color constancy problem must introduce assumptions about the illumination. One common assumption is that there is only one global source of illumination, i.e., that the illumination is constant across the whole scene. Under this assumption, modern color constancy methods achieve excellent results, usually predicting the illumination color with accuracy better than the human eye. However, this assumption is broken in many real-world multi-illuminant scenes, e.g., outdoor images where parts of the scene are illuminated by either sunlight or skylight, leading to a significant drop in the accuracy of single-illuminant estimation methods. Therefore, in this work, we propose a novel method for segmenting images according to illumination in multi-illuminant scenes. The method detects regions lit by a single illuminant, i.e., areas where the single-illuminant assumption holds. We show that our method produces excellent results and outperforms all baseline models by a large margin.
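For reference, the image formation model underlying the ill-posedness mentioned above can be sketched as follows (a minimal illustration with notation assumed here, not taken from the paper): the response of sensor channel $c \in \{R, G, B\}$ at pixel location $x$ is
\[
  \rho_c(x) = \int_{\omega} E(\lambda, x)\, S(\lambda, x)\, C_c(\lambda)\, d\lambda,
\]
where $E(\lambda, x)$ is the spectral power distribution of the illuminant, $S(\lambda, x)$ is the surface spectral reflectance, $C_c(\lambda)$ is the camera sensitivity of channel $c$, and $\omega$ is the visible spectrum. Recovering $E$ and $S$ from $\rho$ alone is underdetermined, hence the need for assumptions. The single-illuminant assumption corresponds to $E(\lambda, x) = E(\lambda)$ for all $x$, which is exactly what multi-illuminant scenes violate.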