Abstract

Color constancy is one of the fundamental tasks in computer vision. Many supervised methods, including recently proposed methods based on Convolutional Neural Networks (CNNs), have been shown to work well on this problem, but they typically require sufficient labeled training data. However, collecting a large number of training images with accurately measured illumination is expensive and time-consuming. To reduce the dependence on labeled images and leverage unlabeled ones without measured illumination, we propose a novel semi-supervised framework for illumination estimation with limited training samples. Our key insight is that images with similar features across different cues share similar lighting conditions. Accordingly, three graphs based on three visual cues (low-level RGB color distribution, mid-level initial illuminant estimates, and high-level scene content) are constructed to represent the relationships among images. These three graphs are then integrated into a unified model, yielding a multi-cue semi-supervised color constancy method (MSCC). Extensive experiments on benchmark datasets demonstrate that, with limited labeled samples, MSCC outperforms nearly all existing supervised methods. Even with no unlabeled samples, MSCC still achieves better accuracy and stability than most supervised methods.
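The multi-cue graph idea described above can be illustrated with a minimal sketch. This is not the paper's actual MSCC formulation; it assumes RBF affinity graphs per cue, equal-weight graph fusion, and standard Laplacian-regularized least squares to propagate known illuminants to unlabeled images. The function names and the choice of regularizer are illustrative assumptions.

```python
import numpy as np

def rbf_graph(feats, sigma=1.0):
    # Dense RBF affinity graph over one feature cue (illustrative choice).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def multi_cue_estimate(cue_feats, illums, labeled, lam=0.1):
    """Estimate per-image illuminants by Laplacian-regularized least
    squares on a fused multi-cue graph (a sketch, not the MSCC model).

    cue_feats : list of (n, d_k) arrays, one per visual cue
    illums    : (n, 3) RGB illuminants; rows of unlabeled images are ignored
    labeled   : (n,) boolean mask of images with measured illumination
    """
    n = illums.shape[0]
    L = np.zeros((n, n))
    for F in cue_feats:                 # equal-weight fusion (an assumption)
        W = rbf_graph(F)
        L += np.diag(W.sum(1)) - W      # unnormalized graph Laplacian
    L /= len(cue_feats)
    C = np.diag(labeled.astype(float))  # fit labels only where known
    # argmin_f  ||C^(1/2) (f - y)||^2  +  lam * tr(f^T L f)
    f = np.linalg.solve(C + lam * L, C @ illums)
    # Return unit-norm illuminant directions, the usual convention.
    return f / np.linalg.norm(f, axis=1, keepdims=True)
```

Images that are close in any cue's feature space are pulled toward the same illuminant estimate by the Laplacian smoothness term, which is the mechanism the abstract's "similar features, similar lighting" insight relies on.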
