Abstract

Multi-illuminant-based color constancy (MCC) is a challenging task. In this paper, we propose a novel model, motivated by the bottom-up and top-down mechanisms of the human visual system (HVS), to estimate the spatially varying illumination in a scene. The bottom-up estimation is motivated by our finding that the bright and dark parts of a scene play different roles in encoding illuminants. However, pure bottom-up processing struggles to handle the color shift introduced by large colorful objects. We therefore further introduce a top-down constraint inspired by findings in visual psychophysics, in which high-level information (e.g., a prior on light-source colors) plays a key role in visual color constancy. To implement the top-down hypothesis, we simply learn a color mapping between the illuminant distribution estimated by bottom-up processing and the ground-truth maps provided by the dataset. We evaluated our model on four datasets, and the results show that it achieves very competitive performance compared with state-of-the-art MCC algorithms. Moreover, the robustness of our model is notable given that our results were obtained either with the same parameters across all datasets or with parameters learned from the inputs themselves, that is, mimicking how the HVS operates. We also show color-correction results on real-world images taken from the web.
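To make the two-stage idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes a simple per-patch bright/dark pixel pooling as a stand-in for the bottom-up pathway, and a least-squares 3x3 color mapping fitted against ground-truth maps as a stand-in for the learned top-down correction. All function names, thresholds, and the pooling heuristic are assumptions for illustration only.

    # Illustrative sketch of a bottom-up + top-down MCC pipeline.
    # The patch-based heuristic and the 3x3 mapping are assumptions,
    # not the paper's actual model.
    import numpy as np

    def bottom_up_estimate(img, patch=32, bright_q=0.95, dark_q=0.05):
        """Per-patch illuminant guess from bright and dark pixel pools.

        The abstract notes that bright and dark regions encode the
        illuminant differently; here we simply average the two pools
        per patch and normalize, yielding a spatially varying map.
        """
        h, w, _ = img.shape
        est = np.zeros((h, w, 3), dtype=np.float64)
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                block = img[y:y + patch, x:x + patch]
                block = block.reshape(-1, 3).astype(np.float64)
                lum = block.sum(axis=1)
                bright = block[lum >= np.quantile(lum, bright_q)]
                dark = block[lum <= np.quantile(lum, dark_q)]
                ill = 0.5 * (bright.mean(axis=0) + dark.mean(axis=0))
                est[y:y + patch, x:x + patch] = ill / (np.linalg.norm(ill) + 1e-8)
        return est

    def fit_top_down(est_maps, gt_maps):
        """Least-squares 3x3 mapping from bottom-up estimates to ground
        truth, standing in for the learned top-down color prior."""
        X = np.concatenate([m.reshape(-1, 3) for m in est_maps])
        Y = np.concatenate([m.reshape(-1, 3) for m in gt_maps])
        M, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return M  # refine a new estimate with: est.reshape(-1, 3) @ M

In this reading, the top-down stage is a global correction learned once from training pairs and then applied to every bottom-up map, which matches the abstract's description of learning a mapping between estimated illuminant distributions and dataset-provided ground truth.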
