Abstract
The objective of this paper is to argue that, for color constancy, data matters more than the network. Computational color constancy is a device-dependent linear operation that forms part of the camera imaging pipeline. We extend the dataset based on this pipeline and show that scene illumination can be predicted with a very simple network, provided the dataset is sufficiently large and evenly distributed. To expand the dataset, we first remove the illumination color cast from each image using its ground-truth illumination color, and then apply randomly generated, evenly distributed illumination colors to the images. We randomly generate five labels for each image and relight the image accordingly to obtain this dataset. Using this dataset, we introduce a very simple network that computes the color mapping function needed to correct an image's colors. Experiments on our new datasets demonstrate that the proposed method significantly outperforms state-of-the-art color constancy methods.
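The dataset-expansion step described above can be sketched as a diagonal (von Kries style) relighting: divide each pixel by the ground-truth illuminant to neutralize the cast, then multiply by a newly sampled illuminant. The function name, the uniform sampling range for the five labels, and the clipping to [0, 1] are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def relight(image, gt_illuminant, new_illuminant):
    """Remove the ground-truth illumination cast, then apply a new one.

    image: float array of shape (H, W, 3) in linear RGB, values in [0, 1].
    gt_illuminant, new_illuminant: length-3 RGB vectors (assumed nonzero).
    Sketch of the augmentation described in the abstract; exact details
    of the paper's pipeline may differ.
    """
    gt = np.asarray(gt_illuminant, dtype=float)
    new = np.asarray(new_illuminant, dtype=float)
    canonical = image / gt                      # divide out the original cast
    return np.clip(canonical * new, 0.0, 1.0)   # apply the new illuminant

# Generate five random illuminant labels per image (hypothetical sampling range)
rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))                   # stand-in for a real photo
gt = np.array([0.9, 1.0, 0.7])                  # stand-in ground-truth illuminant
labels = rng.uniform(0.4, 1.0, size=(5, 3))
augmented = [relight(image, gt, lab) for lab in labels]
```

In practice the sampled illuminants would be drawn so that their chromaticities cover the gamut evenly, which is the "evenly distributed" property the abstract emphasizes.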