Abstract

Color constancy is the ability to measure the colors of objects independently of the color of the light source. A well-known family of color constancy methods uses specular edges to estimate the illuminant. However, separating specular edges from the input image is under-constrained, and existing methods either require user assistance or handle only simple scenes. This paper presents an iterative weighted specular-edge color constancy scheme that leverages a large database of images gathered from the web. Given an input image, we run an efficient visual search to find the closest visual context in the database and use it as an image-specific prior. This prior is then used to correct the chromaticity of the input image before illuminant estimation, providing a good initial guess for the subsequent iterations. Next, a specular-map guided filter is used to improve the precision of specular-edge weighting, which in turn increases the accuracy of illuminant estimation. Finally, we evaluate our scheme on standard databases, and the results show that it outperforms state-of-the-art color constancy methods.
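The abstract only outlines the pipeline, so the sketch below is one illustrative reading of it rather than the authors' implementation. The function names (`specular_edge_weights`, `iterative_specular_edge_cc`), the crude specular map built from bright, low-saturation pixels, the Gaussian smoothing used in place of the paper's specular-map guided filter, and the treatment of the prior as a 3-vector illuminant guess are all assumptions introduced for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel


def estimate_illuminant_weighted_edge(img, weights, p=6, sigma=1.0):
    """Weighted edge-based illuminant estimate: Minkowski norm of weighted
    per-channel gradient magnitudes, in the spirit of weighted Grey-Edge."""
    est = np.zeros(3)
    for c in range(3):
        chan = gaussian_filter(img[..., c], sigma)
        grad = np.hypot(sobel(chan, axis=0), sobel(chan, axis=1))
        est[c] = np.sum(weights * grad ** p) ** (1.0 / p)
    return est / (np.linalg.norm(est) + 1e-12)


def specular_edge_weights(img, smooth_sigma=2.0):
    """Hypothetical specular weight map: bright, low-saturation pixels are
    treated as specular candidates; Gaussian smoothing stands in for the
    paper's specular-map guided filter."""
    mx, mn = img.max(axis=2), img.min(axis=2)
    saturation = (mx - mn) / (mx + 1e-12)
    return gaussian_filter(mx * (1.0 - saturation), smooth_sigma)


def iterative_specular_edge_cc(img, prior=None, n_iter=5):
    """Iterative scheme: optionally apply an image-specific prior as an
    initial chromaticity correction, then alternate weight computation and
    illuminant estimation. `prior` is an assumed 3-vector illuminant guess
    (e.g. taken from the closest visual context found by the search)."""
    corrected = img.astype(np.float64)
    total = np.ones(3)
    if prior is not None:
        prior = np.asarray(prior, dtype=np.float64)
        corrected = corrected * (prior.mean() / prior)   # von Kries-style correction
        total *= prior / (np.linalg.norm(prior) + 1e-12)
    for _ in range(n_iter):
        w = specular_edge_weights(corrected)
        est = estimate_illuminant_weighted_edge(corrected, w)
        corrected = corrected * (est.mean() / (est + 1e-12))
        total = total * est
        total /= np.linalg.norm(total) + 1e-12
    return total, corrected
```

As a usage example, `iterative_specular_edge_cc(image, prior=np.array([0.6, 0.58, 0.55]))` would return the accumulated illuminant estimate and the chromaticity-corrected image; the iteration count and the form of the weight map are free parameters not specified in the abstract.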
