Abstract

Artificial lights powered by alternating current (AC) are ubiquitous, and their intensity fluctuates with the AC waveform. In contrast to previous color constancy methods that exploit spatial color information, we propose a novel deep learning-based color constancy method that exploits the temporal intensity variations of AC-powered lights. Using a high-speed camera, we capture these intensity variations and use them as an important cue for illuminant learning. We propose a network composed of spatial and temporal branches so that the model learns both spatial and temporal features. The spatial branch learns conventional spatial features from a single image, whereas the temporal branch learns the temporal features of AC-induced light intensity variations in a high-speed video. The proposed method computes, at low computational cost, the temporal correlation between high-speed frames to extract effective temporal features; the output is fed into the temporal branch to help the model concentrate on illuminant-attentive regions. By learning both spatial and temporal features, the proposed method performs well even in complex real-world illuminant environments where color constancy is difficult. Experimental results demonstrate that the proposed method reduces angular error by 30% relative to the previous state of the art and works well under various illuminants, including complex ambient light environments.
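The temporal-correlation cue described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes the high-speed video is a `(T, H, W)` grayscale stack and correlates each pixel's temporal intensity profile with the global mean flicker profile, so pixels dominated by AC-induced flicker score near 1 and static pixels score near 0.

```python
import numpy as np

def temporal_correlation_map(frames):
    """Hypothetical sketch: Pearson correlation between each pixel's
    temporal intensity profile and the frame-mean flicker profile.
    frames: (T, H, W) grayscale high-speed stack."""
    frames = frames.astype(np.float64)
    ref = frames.mean(axis=(1, 2))                  # global flicker profile, shape (T,)
    f = frames - frames.mean(axis=0)                # zero-mean per pixel over time
    r = ref - ref.mean()                            # zero-mean reference profile
    num = np.tensordot(r, f, axes=(0, 0))           # covariance numerator, shape (H, W)
    den = np.sqrt((f ** 2).sum(axis=0)) * np.linalg.norm(r) + 1e-8
    return num / den                                # in [-1, 1]; high = flicker-dominated
```

Such a map is cheap to compute (one pass over the stack) and could serve as a soft attention prior over illuminant-informative regions.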

Highlights

  • Color constancy is a fundamental task in the fields of computer vision, computational photography, and image processing [1], [2]

  • In contrast to previous color constancy methods that exploited the spatial color information, we propose a novel deep learning-based color constancy method that exploits the temporal variations exhibited by alternating current (AC)-powered lights

  • The spatial branch of the model aims to learn spatial features from a single image as is accomplished in typical color constancy deep networks. We observed that this branch helps in improving the performance of the proposed method in simple illuminant environments such as a laboratory, whereas the temporal branch is advantageous for complex indoor environments


Introduction

Color constancy is a fundamental task in the fields of computer vision, computational photography, and image processing [1], [2]. Its ultimate objective is to recover the intrinsic surface color by removing the effect of illuminant chromaticity [3]–[6]. In this regard, it is crucial to separate the illuminant chromaticity from a digital image in which surface and illuminant colors are mixed. The statistics-based approach exploits the assumption that the color distribution of image pixels is statistically achromatic [12]–[16]. Although this approach is widely used in commercial devices, it is not effective for narrow color distributions, such as regions with uniform color. The physics-based approach estimates illuminant chromaticity by applying the dichromatic model to a spatial image [17]–
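The statistics-based assumption above (the gray-world hypothesis) admits a compact illustration. The sketch below, a simplified example and not any specific cited method, estimates the illuminant from the per-channel means and applies von Kries-style channel gains; it fails exactly in the narrow-distribution case noted above, since a uniformly colored scene biases the means toward the surface color.

```python
import numpy as np

def gray_world_white_balance(image):
    """Gray-world illuminant estimate: assume the scene's mean color is
    achromatic, so the per-channel means give the illuminant color.
    image: (H, W, 3) uint8 RGB array."""
    img = image.astype(np.float64)
    illuminant = img.reshape(-1, 3).mean(axis=0)     # per-channel mean (R, G, B)
    illuminant /= np.linalg.norm(illuminant)         # unit-norm chromaticity estimate
    gains = illuminant.mean() / illuminant           # von Kries-style channel gains
    corrected = np.clip(np.round(img * gains), 0, 255).astype(np.uint8)
    return corrected, illuminant
```

For an image with a reddish cast, the red channel's gain falls below 1 and the blue channel's rises above 1, pulling the corrected mean toward gray.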


