Abstract

The human visual system unconsciously determines the color of objects by “discounting” the effects of the illumination, whereas machine vision systems have difficulty performing this task. Color constancy algorithms assist computer vision pipelines by removing the effects of the illuminant, which ultimately enables these pipelines to perform better on high-level vision tasks that rely on the color features of the scene. Due to its benefits, numerous color constancy algorithms have been developed, and existing techniques have been improved. Combining different strategies and investigating new methods might help us design simple yet effective algorithms. Thereupon, we present a color constancy algorithm built upon the outcomes of our previous works. Our algorithm rests on the biological finding that the human visual system might be discounting the illuminant based on the highest luminance patches and the space-average color. We form the illuminant estimate from the idea that if the world is gray on average, the deviation of the brightest pixels from the achromatic value should be caused by the illuminant. Our approach utilizes multi-scale operations by considering only the salient pixels, and it accounts for varying surface orientations by adopting a block-based approach. We show that our strategy outperforms learning-free algorithms and provides competitive results compared to learning-based methods. Moreover, we demonstrate that using parts of our strategy can significantly improve the performance of several learning-free methods. We also briefly present an approach to transform our global color constancy method into a multi-illuminant color constancy approach.
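The core idea stated above can be illustrated with a minimal sketch: select the brightest pixels by luminance, attribute their deviation from gray to the illuminant, and apply a diagonal (von Kries style) correction. This is only an assumed simplification of the description in the abstract; the function names (estimate_illuminant, correct_image), the bright_fraction parameter, and the omission of the multi-scale, saliency, and block-based steps are choices made here for illustration, not the authors' implementation.

```python
import numpy as np

def estimate_illuminant(image, bright_fraction=0.05):
    """Rough illuminant estimate from the brightest pixels of an RGB image.

    Assumes the scene is gray on average, so the average color of the
    brightest patches should be achromatic; any deviation is taken to be
    the illuminant color.

    image: float array of shape (H, W, 3), values in [0, 1].
    bright_fraction: fraction of pixels (by luminance) treated as brightest.
    """
    pixels = image.reshape(-1, 3)
    luminance = pixels.mean(axis=1)                     # simple luminance proxy
    k = max(1, int(len(pixels) * bright_fraction))
    brightest = pixels[np.argsort(luminance)[-k:]]      # top-k brightest pixels
    illuminant = brightest.mean(axis=0)                 # their average color
    return illuminant / np.linalg.norm(illuminant)      # unit-norm estimate

def correct_image(image, illuminant):
    """Diagonal correction: scale channels so the estimated illuminant maps to gray."""
    corrected = image / (illuminant * np.sqrt(3))       # achromatic illuminant -> divide by 1
    return np.clip(corrected, 0.0, 1.0)
```

In this sketch the correction leaves an image unchanged when the estimated illuminant is already achromatic, which matches the stated assumption that only the deviation from the achromatic value is attributed to the light source.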
