Abstract

In human and computer vision, color constancy is the ability to perceive the true color of objects despite changing illumination conditions. Color constancy benefits computer vision tasks such as human tracking, object and human detection, and scene understanding. Traditional color constancy approaches based on the gray-world assumption fall short of serving as universal predictors, while recent color constancy methods have progressed greatly with the introduction of convolutional neural networks (CNNs). Yet shallow CNN-based methods face limited learning capability. Accordingly, this article proposes a novel color constancy method that uses a multi-stream deep neural network (MSDNN)-based convoluted mixture of deep experts (CMoDE) fusion technique to perform deep learning and estimate local illumination. In the proposed method, the CMoDE fusion technique extracts and learns spatial and spectral features in an image space. The proposed method distinctively stacks layers both in series and in parallel, and selects and concatenates effective paths in the CMoDE-based DCNN, as opposed to previous works where residual networks stack multiple layers linearly and concatenate multiple paths. As a result, the proposed CMoDE-based DCNN brings significant progress in both the efficiency of computing resources and the accuracy of illuminant estimation. In the experiments, Shi's Reprocessed, Gray-Ball, and NUS-8 Camera datasets are used to demonstrate illumination and camera invariance. The experimental results establish that the new method surpasses its conventional counterparts.

Highlights

  • In computer vision, the perceived color of objects is significantly affected by the color of the illumination in the scene [1]

  • For the past two decades, computer vision researchers have proposed many color constancy adjustment techniques to cope with coloration and color cast in digital images

  • The proposed method adopts the convoluted mixture of deep experts (CMoDE) fusion technique to achieve accurate estimation of the local illumination. This technique has the distinct merit of selecting and concatenating effective paths, which allows the network to remain shallow. The CMoDE fusion technique thus contributes to the estimation accuracy of the proposed network, as demonstrated in the Experimental Results and Evaluations section
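To make the path-selection idea in the highlights concrete, the sketch below shows a generic gated mixture-of-experts fusion in NumPy: a gate scores each expert stream, only the top-k streams are kept (path selection), and the survivors are concatenated with their gate weights (path concatenation). The function names, the top-k rule, and the toy features are illustrative assumptions, not the paper's exact CMoDE design.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over gate logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_style_fusion(expert_features, gate_logits, top_k=2):
    """Gated mixture-of-experts fusion (illustrative, not the paper's CMoDE).

    expert_features: list of 1-D feature vectors, one per expert stream.
    gate_logits: raw gate scores, one per expert.
    Returns the concatenated, gate-weighted features of the top-k experts
    and the sorted indices of the selected paths.
    """
    weights = softmax(gate_logits)
    # Path selection: keep only the k highest-weighted expert streams.
    keep = np.argsort(weights)[-top_k:]
    # Path concatenation: join the selected, gate-weighted feature vectors.
    fused = np.concatenate([weights[i] * expert_features[i] for i in keep])
    return fused, sorted(keep.tolist())

# Toy example: three 4-dimensional expert streams; the gate favors experts 1 and 2.
experts = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
fused, selected = moe_style_fusion(experts, np.array([0.1, 2.0, 1.0]), top_k=2)
```

Selecting a subset of paths rather than summing all of them is what lets such a fused network stay shallow while still covering diverse feature subspaces.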


Introduction

The perceived color of objects is significantly affected by the color of the illumination in the scene [1]. Bianco et al. [26] propose a CNN-based illuminant estimation method that uses a histogram-stretching technique. Biological color constancy methods are intended to mimic the functional characteristics of the human visual system (HVS) to perform learning-based illuminant estimation, and several such models have been presented [32]–[34].
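As context for the learning-based estimators discussed above, the classical gray-world baseline mentioned in the abstract can be sketched in a few lines: the per-channel mean of the image is taken as the illuminant estimate, and the image is corrected by von Kries-style per-channel scaling. This is a minimal NumPy sketch of the textbook algorithm, not code from the paper.

```python
import numpy as np

def gray_world_correct(image):
    """Estimate the illuminant under the gray-world assumption (the average
    reflectance of a scene is achromatic) and correct the image by
    per-channel scaling.

    image: float array of shape (H, W, 3), values in [0, 1].
    Returns (corrected_image, estimated_illuminant).
    """
    # The per-channel mean serves as the illuminant estimate.
    illuminant = image.reshape(-1, 3).mean(axis=0)
    # Scale each channel so its mean matches the overall gray level.
    gray = illuminant.mean()
    corrected = np.clip(image * (gray / illuminant), 0.0, 1.0)
    return corrected, illuminant

# Example: a scene tinted by a reddish illuminant.
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.7, size=(4, 4, 3))
tinted = np.clip(scene * np.array([1.3, 1.0, 0.7]), 0.0, 1.0)
balanced, est = gray_world_correct(tinted)
```

After correction the three channel means coincide, which removes the color cast; the baseline fails exactly when the scene's true average reflectance is not gray, which is the shortcoming that motivates the learning-based methods in this article.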

