Abstract

Computational color constancy algorithms are commonly evaluated only through angular error analysis on annotated datasets of static images. The widespread use of video in consumer devices motivated us to define a richer methodology for color constancy evaluation. To this end, temporal and spatial stability are defined here to quantify the sensitivity of color constancy algorithms to variations in the scene that do not depend on the illuminant source, such as moving subjects or a moving camera. Our evaluation methodology is applied to compare several color constancy algorithms on stable sequences belonging to the Gray Ball and Burst Color Constancy video datasets. The stable sequences, identified using a general-purpose procedure, are made available for public download to encourage future research. Our investigation demonstrates the importance of evaluating color constancy algorithms with multiple metrics, rather than angular error alone. For example, the popular fully convolutional color constancy with confidence-weighted pooling algorithm is consistently the best-performing solution in terms of angular error, but it is often surpassed in terms of stability by the traditional gray edge algorithm and by the more recent sensor-independent illumination estimation algorithm.
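
The angular error mentioned above is the standard recovery error between the estimated and ground-truth illuminant vectors, and a minimal sketch of it is given below. The `temporal_instability` helper is a hypothetical frame-to-frame proxy added for illustration only; it is an assumption on our part, not the paper's definition of temporal or spatial stability.

```python
# Sketch of the standard recovery angular error used in color constancy
# evaluation: the angle, in degrees, between the estimated and the
# ground-truth illuminant RGB vectors.
import numpy as np

def angular_error(est: np.ndarray, gt: np.ndarray) -> float:
    """Angle in degrees between estimated and ground-truth illuminants."""
    est = est / np.linalg.norm(est)
    gt = gt / np.linalg.norm(gt)
    cos = np.clip(np.dot(est, gt), -1.0, 1.0)  # guard against rounding error
    return float(np.degrees(np.arccos(cos)))

# Illustrative stability proxy (an assumption, not the paper's metric):
# mean angular error between the illuminant estimates of consecutive frames
# in a sequence where the true illuminant is fixed.
def temporal_instability(estimates: np.ndarray) -> float:
    """Mean frame-to-frame angular error over an (N, 3) array of estimates."""
    errors = [angular_error(a, b)
              for a, b in zip(estimates[:-1], estimates[1:])]
    return float(np.mean(errors))
```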
