Abstract

Color constancy is one of the key steps in the image formation process of digital cameras. Its goal is to process the image so that the colors of objects and surfaces are not influenced by the color of the scene illumination. To capture the target scene colors as accurately as possible, it is crucial to estimate the illumination vector with high accuracy. Unfortunately, illumination estimation is an ill-posed problem, and solving it most often relies on assumptions. To date, various assumptions have been proposed, resulting in a wide variety of illumination estimation methods. Statistics-based methods have been shown to be suitable for hardware implementation, but learning-based methods achieve state-of-the-art results, especially those that use deep neural networks. The large learning capacity and generalization ability of deep neural networks can be used to develop illumination estimation methods that are more general and more precise. This approach avoids introducing many new assumptions, which often hold only in specific situations. In this paper, a new method for illumination estimation based on light source classification is proposed. In the first step, the set of possible illuminations is reduced by classifying the input image into one of three classes: images captured in outdoor scenes under natural illumination, images captured in outdoor scenes under artificial illumination, and images captured in indoor scenes under artificial illumination. In the second step, a deep illumination estimation network trained exclusively on images of the class predicted in the first step is applied to the input image. Dividing the illumination space into smaller regions simplifies the training of the illumination estimation networks because the distribution of image scenes and illuminations within each region is less diverse. Experiments on the Cube+ image dataset show a median illumination estimation error of 1.27°, an improvement of more than 25% over using a single network for all illuminations.
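To make the two-step procedure concrete, the following is a minimal sketch of how such a classify-then-estimate pipeline could be wired together, along with the standard angular error metric behind the reported 1.27° median. All identifiers here (`scene_classifier`, `estimators`, `estimate_illumination`) are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angular (recovery) error between two illumination vectors, in degrees."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def estimate_illumination(image, scene_classifier, estimators):
    """Two-step estimation: (1) classify the scene to narrow the set of
    possible illuminations, (2) apply the estimator trained on that class.

    `scene_classifier` maps an image to one of the three classes named in
    the abstract; `estimators` is a dict of per-class deep networks
    (both hypothetical stand-ins for the paper's trained models).
    """
    # Step 1: outdoor-natural / outdoor-artificial / indoor-artificial
    scene_class = scene_classifier(image)
    # Step 2: class-specific illumination estimation network
    rgb = estimators[scene_class](image)
    # Only the direction (chromaticity) of the illumination vector matters
    return rgb / np.linalg.norm(rgb)
```

Training each estimator only on images of its own class narrows the distribution of scenes and illuminations that network has to model, which is the stated reason the specialized networks outperform a single network trained on all illuminations.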

Highlights

  • One of the first steps in the image formation pipeline of contemporary digital cameras is computational color constancy

  • The authors show that classification-based methods improve illumination estimation, especially when indoor-outdoor classification with an additional uncertainty class is used to determine which illumination estimation method to apply to the input image

  • In contrast to [3] and this paper, which classify the input image based on its features to reduce the illumination space before the illumination estimation step, [4] proposed the opposite, i.e., using illumination estimation for indoor-outdoor image classification

Summary

INTRODUCTION

One of the first steps in the image formation pipeline of contemporary digital cameras is computational color constancy. From the image formation pipeline, it can be seen that the colors in the image are a combination of three physical quantities: the spectral power distribution of the light source I(λ), the spectral reflectance properties ρ(λ) of the surfaces in the image scene, and the sensitivity of the camera sensor. The major drawback of illumination estimation is that it is an ill-posed problem: because most often both I(λ) and ρ(λ) are unknown and only the image pixel values f are known, there is an infinite number of possible illumination and surface reflectance combinations for a given image f. To overcome this issue, different assumptions for illumination estimation have been proposed, yielding a wide variety of illumination estimation methods.
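For reference, the statement above corresponds to the Lambertian image formation model commonly used in the color constancy literature; the following is a sketch under the assumption of a single, spatially uniform illuminant, where s_c(λ) denotes the camera sensor sensitivity for channel c (the paper's exact notation may differ):

```latex
% Pixel value in channel c at location x (assumed standard notation;
% \omega is the visible spectrum):
f_c(\mathbf{x}) = \int_{\omega} I(\lambda)\, \rho(\mathbf{x}, \lambda)\, s_c(\lambda)\, \mathrm{d}\lambda,
\qquad c \in \{R, G, B\}

% Illumination estimation then seeks (the direction of) the vector
e_c = \int_{\omega} I(\lambda)\, s_c(\lambda)\, \mathrm{d}\lambda
```

Since I(λ) and ρ(λ) appear only as a product inside the integral, infinitely many combinations of the two can produce the same pixel values f, which is precisely why the problem is ill-posed without additional assumptions.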

RELATED WORK
THE PROPOSED METHOD
PERFORMANCE METRICS