Abstract

Computational color constancy (CCC) is a fundamental prerequisite for many computer vision tasks. The key step of CCC is to estimate the illuminant color so that an image of a scene captured under varying illumination can be normalized to the image under a canonical illumination. As one type of solution, combination algorithms generally try to achieve better illuminant estimation by weighting the outputs of several unitary algorithms for a given image. However, due to the diversity of image features, applying the same weighting strategy to different images can produce unsound illuminant estimates. To address this problem, this study provides an effective option. A two-step strategy is first employed to cluster the training images; then, for each cluster, an ANFIS (adaptive neuro-fuzzy inference system) model is trained to map image features to illuminant color. Given a test image, fuzzy weights measuring the degree to which the image belongs to each cluster are calculated, and a reliable illuminant estimate is obtained by weighting all ANFIS predictions accordingly. The proposed method thus makes illuminant estimation a dynamic combination of the initial estimates from several unitary algorithms, relying on the learning and reasoning capabilities of ANFIS. Extensive experiments on typical benchmark datasets demonstrate the effectiveness of the proposed approach. In addition, although initial observations suggest that some learning-based methods outperform even the most carefully designed and tested combinations of statistical and fuzzy inference systems, the proposed method remains good practice for illuminant estimation, since fuzzy inference is easy to implement in imaging signal processors with if-then rules and low computational effort.
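
The fuzzy-weighted combination described above can be sketched as follows. This is an illustrative sketch only: the paper's two-step clustering, image features, and trained ANFIS models are not reproduced, and the fuzzy c-means-style membership formula, the fuzziness exponent `m`, and the stand-in `models` callables are assumptions.

```python
import numpy as np

def fuzzy_weights(features, centers, m=2.0):
    """Fuzzy membership of a feature vector in each cluster.

    Uses an inverse-distance (fuzzy c-means style) rule as a stand-in for
    the paper's cluster-membership computation; `m` controls fuzziness.
    """
    d = np.linalg.norm(centers - features, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()  # weights sum to 1

def combine_estimates(features, centers, models):
    """Weight each cluster model's illuminant prediction by fuzzy membership.

    `models` is one predictor per cluster (the ANFIS models in the paper),
    each mapping image features to an RGB illuminant estimate.
    """
    w = fuzzy_weights(features, centers)
    preds = np.array([model(features) for model in models])  # shape (k, 3)
    est = w @ preds
    return est / np.linalg.norm(est)  # unit-norm illuminant color
```

A test image whose features lie near one cluster center is thus dominated by that cluster's model, while images between clusters receive a blended estimate.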

Highlights

  • The human vision system has the instinctive ability to perceive true color even under some specific imaging conditions and scene illumination

  • We show the ground truth, our estimated illuminant color and resulting white-balanced image, and the illuminant colors and white-balanced images estimated by the unitary algorithms (Gray world (GW), General gray world (GGW), White patch (WP), GE1, GE2, Shades of gray (SoG), Principal Component Analysis (PCA)-based, and local surface reflectance (LSR))

  • The proposed method combines eight unitary algorithms (GW, WP, SoG, GE1, GE2, GGW, PCA-based, and LSR), as we found that for any image in our training dataset there are always some of these algorithms that provide better illuminant estimates
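
Several of the unitary algorithms listed above (GW, WP, SoG) are special cases of the well-known Minkowski-norm framework, which the sketch below illustrates; the function name and the default `p = 6` are assumptions, not the paper's code.

```python
import numpy as np

def minkowski_illuminant(img, p=6.0):
    """Estimate the illuminant RGB via the Minkowski-norm framework.

    p = 1 recovers Gray World, p -> inf approaches White Patch, and an
    intermediate p (often 6) gives Shades of Gray. `img` is an (H, W, 3)
    array of linear RGB values.
    """
    flat = img.reshape(-1, 3).astype(float)
    if np.isinf(p):
        e = flat.max(axis=0)                      # White Patch: per-channel maximum
    else:
        e = (flat ** p).mean(axis=0) ** (1.0 / p) # Minkowski p-norm mean per channel
    return e / np.linalg.norm(e)                  # unit-norm illuminant estimate
```

Under a uniformly colored scene all three settings agree; on real images the choice of `p` trades off sensitivity to bright highlights against averaging over the whole scene, which is one reason a combination of such estimators can outperform any single one.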


Introduction

The human vision system has the instinctive ability to perceive true color even under some specific imaging conditions and scene illumination. This “color constancy” capability is becoming more and more necessary for computer systems due to a wide range of computer vision applications [1,2]. Without specific algorithms, computers and the imaging sensors in modern digital cameras do not innately possess this capability. To address this issue, a variety of computational color constancy (CCC) algorithms have been proposed, aiming to compensate for the effect of the illumination on the perceived color of objects. The first step, i.e., illuminant estimation, is the key to CCC.

