Color Constancy From a Pure Color View: An Edge‐Aware Algorithm for a Wider Application

Abstract
The development of imaging technologies allows personal electronic devices to have telephoto and close-up cameras. The images captured by these cameras are sometimes dominated by a single color (i.e., pure color images). In our earlier work, the PolyU Pure Color dataset was collected and the Pure Color Constancy (PCC) method was proposed, the first study investigating color constancy for pure color images. To make the method more robust, especially when the images are not as extreme as those included in the PolyU Pure Color dataset, an edge-aware Pure Color Constancy (ePCC) method is proposed in this article. It adopts a similar architecture to the PCC method, with four additional color features derived from the edge map of an image as input. Moreover, the PolyU Pure Color dataset V2 was collected. It includes 1271 images, which are not as extreme as those in the PolyU Pure Color dataset V1 and cover a wider range of illuminant colors. The proposed ePCC method was found to outperform the PCC method on pure color images, reducing the angular error by 10% with only slight increases in the number of parameters and computational resources. The ePCC method also achieved performance comparable to various state-of-the-art learning-based methods on images of normal scenes.
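The angular error quoted in the abstract is the standard recovery error between the estimated and ground-truth illuminant vectors. A minimal sketch in Python (the function name is ours, not from the paper):

```python
import numpy as np

def angular_error_deg(est, gt):
    """Recovery angular error (degrees) between an estimated and a
    ground-truth illuminant; scale-invariant by construction."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Identical directions give zero error; overall scaling does not matter.
print(angular_error_deg([2.0, 2.0, 2.0], [1.0, 1.0, 1.0]))  # → 0.0
```

The `clip` guards against floating-point cosines slightly outside [-1, 1], which would otherwise make `arccos` return NaN.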

Similar Papers
  • Research Article
  • Cited by 7
  • 10.1364/josaa.482698
Color constancy from a pure color view.
  • Feb 27, 2023
  • Journal of the Optical Society of America A
  • Shuwei Yue + 1 more

Great efforts have been made on illuminant estimation in both academia and industry, leading to the development of various statistical- and learning-based methods. Little attention, however, has been given to images that are dominated by a single color (i.e., pure color images), though they are nontrivial for smartphone cameras. In this study, a pure color image dataset, "PolyU Pure Color," was developed. A lightweight feature-based multilayer perceptron (MLP) neural network model—"Pure Color Constancy (PCC)"—was also developed for estimating the illuminant of pure color images using four color features (i.e., the chromaticities of the maximal, mean, brightest, and darkest pixels) of an image. The proposed PCC method was found to have significantly better performance for pure color images in the PolyU Pure Color dataset and comparable performance for normal images in two existing image datasets, in comparison to various state-of-the-art learning-based methods, with good cross-sensor performance. Such performance was achieved with a much smaller number of parameters (around 400) and a very short processing time (around 0.25 ms per image) using an unoptimized Python package, which makes the proposed method suitable for practical deployment.
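The four chromaticity features PCC feeds to its MLP can be sketched as follows; the function name, the brightness proxy (R+G+B), and the chromaticity normalization are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def pcc_features(img):
    """Four color features of an (H, W, 3) RGB image: chromaticities of
    the channel-wise maximum, the mean, the brightest, and the darkest
    pixels, concatenated into a 12-vector."""
    px = img.reshape(-1, 3).astype(float)

    def chroma(rgb):
        s = rgb.sum()
        return rgb / s if s > 0 else np.full(3, 1.0 / 3.0)

    per_channel_max = px.max(axis=0)   # channel-wise maximum (max-RGB style)
    mean_pixel = px.mean(axis=0)       # gray-world style mean
    lum = px.sum(axis=1)               # brightness proxy: R+G+B
    brightest = px[lum.argmax()]
    darkest = px[lum.argmin()]

    return np.concatenate([chroma(v) for v in
                           (per_channel_max, mean_pixel, brightest, darkest)])
```

A uniform image yields 1/3 for every component, since all four statistics coincide with the same achromatic-normalized pixel.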

  • Research Article
  • 10.2352/cic.2022.30.1.35
Dive into Illuminant Estimation from a Pure Color View
  • Nov 15, 2022
  • Color and Imaging Conference
  • Shuwei Yue + 1 more

Illuminant estimation is critically important in computational color constancy, which has attracted great attention and motivated the development of various statistical- and learning-based methods. Past studies, however, seldom investigated the performance of these methods on pure color images (i.e., images dominated by a single pure color), which are actually very common in daily life. In this paper, we develop a lightweight feature-based Deep Neural Network (DNN) model—Pure Color Constancy (PCC). The model uses four color features (i.e., the chromaticities of the maximal, mean, brightest, and darkest pixels) as inputs and contains fewer than 0.5k parameters. It takes only 0.25 ms to process an image and has good cross-sensor performance. The angular errors on three standard datasets are generally comparable to those of state-of-the-art methods. More importantly, the model yields significantly smaller angular errors on the pure color images in the PolyU Pure Color dataset, which was recently collected by us.

  • Research Article
  • Cited by 12
  • 10.1109/tip.2014.2336545
Color constancy using 3D scene geometry derived from a single image.
  • Jul 16, 2014
  • IEEE Transactions on Image Processing
  • Noha Elfiky + 3 more

The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most existing color constancy algorithms are based on specific imaging assumptions (e.g., the gray-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depth/layer) found in images. The aim is to classify images into stages (rough 3D geometry models). According to the stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy method is selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics, and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility of estimating multiple illuminants by distinguishing nearby light sources from distant illumination. Experiments on state-of-the-art datasets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms, with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all test images correctly classified into stages), the proposed method achieves an improvement of 52% in median angular error compared with the best-performing single color constancy algorithm.

  • Research Article
  • Cited by 843
  • 10.1109/tip.2007.901808
Edge-Based Color Constancy
  • Sep 1, 2007
  • IEEE Transactions on Image Processing
  • J Van De Weijer + 2 more

Color constancy is the ability to measure colors of objects independent of the color of the light source. A well-known color constancy method is based on the gray-world assumption which assumes that the average reflectance of surfaces in the world is achromatic. In this paper, we propose a new hypothesis for color constancy namely the gray-edge hypothesis, which assumes that the average edge difference in a scene is achromatic. Based on this hypothesis, we propose an algorithm for color constancy. Contrary to existing color constancy algorithms, which are computed from the zero-order structure of images, our method is based on the derivative structure of images. Furthermore, we propose a framework which unifies a variety of known (gray-world, max-RGB, Minkowski norm) and the newly proposed gray-edge and higher order gray-edge algorithms. The quality of the various instantiations of the framework is tested and compared to the state-of-the-art color constancy methods on two large data sets of images recording objects under a large number of different light sources. The experiments show that the proposed color constancy algorithms obtain comparable results as the state-of-the-art color constancy methods with the merit of being computationally more efficient.
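A minimal sketch of the gray-edge estimator in the paper's Minkowski-norm framework; derivative smoothing and other details from the paper are omitted, so treat this as illustrative:

```python
import numpy as np

def gray_edge(img, p=6):
    """Gray-edge illuminant estimate: each channel of the illuminant is
    taken proportional to the Minkowski p-norm of that channel's image
    derivative magnitudes; the result is L2-normalized."""
    est = np.empty(3)
    for c in range(3):
        gy, gx = np.gradient(img[:, :, c].astype(float))
        mag = np.hypot(gx, gy)                    # per-pixel edge strength
        est[c] = (mag ** p).mean() ** (1.0 / p)   # Minkowski p-norm
    n = np.linalg.norm(est)
    return est / n if n > 0 else np.full(3, 1.0 / np.sqrt(3))
```

Setting p=1 gives the plain gray-edge average, while large p approaches a max-edge estimator, mirroring how gray-world and max-RGB sit as zero-order instances of the same unified framework.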

  • Research Article
  • Cited by 275
  • 10.1109/tpami.2010.93
Color Constancy Using Natural Image Statistics and Scene Semantics
  • Apr 1, 2011
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • A Gijsenij + 1 more

Existing color constancy methods are all based on specific assumptions such as the spatial and spectral characteristics of images. As a consequence, no algorithm can be considered as universal. However, with the large variety of available methods, the question is how to select the method that performs best for a specific image. To achieve selection and combining of color constancy algorithms, in this paper natural image statistics are used to identify the most important characteristics of color images. Then, based on these image characteristics, the proper color constancy algorithm (or best combination of algorithms) is selected for a specific image. To capture the image characteristics, the Weibull parameterization (e.g., grain size and contrast) is used. It is shown that the Weibull parameterization is related to the image attributes to which the used color constancy methods are sensitive. An MoG-classifier is used to learn the correlation and weighting between the Weibull-parameters and the image attributes (number of edges, amount of texture, and SNR). The output of the classifier is the selection of the best performing color constancy method for a certain image. Experimental results show a large improvement over state-of-the-art single algorithms. On a data set consisting of more than 11,000 images, an increase in color constancy performance up to 20 percent (median angular error) can be obtained compared to the best-performing single algorithm. Further, it is shown that for certain scene categories, one specific color constancy algorithm can be used instead of the classifier considering several algorithms.

  • Conference Article
  • Cited by 2
  • 10.1117/12.912073
Computational color constancy using chromagenic filters in color filter arrays
  • Feb 9, 2012
  • Raju Shrestha + 1 more

In this paper we propose a new color constancy technique, an extension of chromagenic color constancy. Chromagenic illuminant estimation methods take two shots of a scene, one without and one with a specially chosen color filter in front of the camera lens. Here, we introduce chromagenic filters into the color filter array itself by placing them on top of the R, G, or B filters, replacing one of the two green filters in the Bayer pattern. This allows obtaining two images of the same scene via demosaicking: a normal RGB image, and a chromagenic image, equivalent to an RGB image captured through a chromagenic filter. The illuminant can then be estimated using chromagenic illuminant estimation algorithms. The method, which we name CFA-based chromagenic color constancy (4C for short), therefore requires neither two shots nor image registration, unlike other chromagenic color constancy algorithms, making it a more practical and useful computational color constancy method for many applications. Experiments show that the proposed color filter array based chromagenic color constancy method produces results comparable to chromagenic color constancy without interpolation.

  • Research Article
  • Cited by 7
  • 10.1007/s10043-011-0054-7
A color constancy method using fuzzy measures and integrals
  • May 1, 2011
  • Optical Review
  • Tara Akhavan + 1 more

The ability to measure colors of objects independent of the light source illumination is called color constancy, an important problem in machine vision and image processing. In this paper, we propose a new combinational method based on fuzzy measures and integrals to estimate the chromaticity of the light source, the major step of color constancy. The basic idea of the proposed method is that many color constancy methods share similarities in their structure and the way they are applied. The proposed method assigns fuzzy measures to these methods and their combinations and computes the Choquet fuzzy integral. To validate the method, we selected four well-known algorithms and combined their results with the proposed approach. In selecting these methods, we tried to choose those with better performance than other methods; however, the proposed method can be applied to any other methods simply by adjusting its parameters. It is shown in this article that the proposed approach performs better than other color constancy methods most of the time.
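The fusion step rests on the discrete Choquet integral. A small sketch, with the fuzzy measure supplied as a mapping from coalitions (frozensets) of method names to values in [0, 1]; the names and data layout are our assumptions:

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of per-method scores with respect to a
    fuzzy measure defined on coalitions of method names.  The measure
    should be monotone with measure[all methods] == 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending values
    names = [name for name, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, val) in enumerate(items):
        coalition = frozenset(names[i:])  # methods scoring at least `val`
        total += (val - prev) * measure[coalition]
        prev = val
    return total
```

With an additive measure the integral reduces to a weighted mean; interaction between methods enters only through non-additive coalition values, which is what the fuzzy-measure assignment in the paper exploits.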

  • Research Article
  • 10.1007/s00138-021-01190-w
In color constancy: data mattered more than network
  • Mar 20, 2021
  • Machine Vision and Applications
  • Zhuo-Ming Du + 2 more

The objective of this paper is to argue that data matters more than the network in color constancy. Computational color constancy is a device-dependent linear operation that is part of the camera imaging pipeline. We extend the dataset based on this pipeline and show that scene illumination can be predicted by a very simple network as long as the dataset is large enough and evenly distributed. In expanding the dataset, we first remove each image's color cast using its ground-truth illumination color, and then apply randomly generated, evenly distributed illumination colors. We randomly generate five labels for each image and process the image accordingly to obtain this dataset. Using this dataset, we introduce a very simple network that computes the color mapping function to correct the image's colors. Experiments on our new datasets demonstrate that the method of this paper significantly outperforms state-of-the-art color constancy methods.
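The augmentation step described above amounts to a diagonal von Kries operation: divide out the ground-truth illuminant, then multiply in a randomly drawn one. A hedged sketch (function name and clipping choice are ours):

```python
import numpy as np

def recast(img, gt_illum, new_illum):
    """Remove the ground-truth cast (diagonal von Kries division), then
    apply a new illuminant to synthesize an augmented training sample."""
    neutral = img.astype(float) / np.asarray(gt_illum, dtype=float)
    return np.clip(neutral * np.asarray(new_illum, dtype=float), 0.0, None)
```

Recasting an image with its own ground-truth illuminant is a no-op, which makes a cheap sanity check for a pipeline built this way.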

  • Conference Article
  • Cited by 47
  • 10.1109/cvpr42600.2020.00332
Multi-Domain Learning for Accurate and Few-Shot Color Constancy
  • Jun 1, 2020
  • Jin Xiao + 2 more

Color constancy is an important process in the camera pipeline that removes the color bias of a captured image caused by scene illumination. Recently, significant improvements in color constancy accuracy have been achieved using deep neural networks (DNNs). However, existing DNN-based color constancy methods learn distinct mappings for different cameras, which requires a costly data acquisition process for each camera device. In this paper, we present a pioneering work introducing multi-domain learning to the color constancy area. For different camera devices, we train a branch of networks that share the same feature extractor and illuminant estimator, and only employ a camera-specific channel re-weighting module to adapt to camera-specific characteristics. Such a multi-domain learning strategy enables us to benefit from cross-device training data. The proposed multi-domain learning color constancy method achieved state-of-the-art performance on three commonly used benchmark datasets. Furthermore, we also validate the proposed method in a few-shot color constancy setting. Given a new, unseen device with a limited number of training samples, our method is capable of delivering accurate color constancy by merely learning the camera-specific parameters from the few-shot dataset. Our project page is publicly available at https://github.com/msxiaojin/MDLCC.

  • Research Article
  • Cited by 28
  • 10.1109/lsp.2014.2366973
Color Cat: Remembering Colors for Illumination Estimation
  • Jun 1, 2015
  • IEEE Signal Processing Letters
  • Nikola Banic + 1 more

Having images look the same regardless of the scene illumination is a desirable feature called color constancy. In this paper, the Color Cat (CC), a novel, fast, and accurate learning-based method for computational color constancy, is proposed. It learns and then uses the relationship between transformed color histograms and the regularity in the possible illumination colors. The proposed method is tested on a publicly available color constancy dataset and is shown to outperform most other color constancy methods in terms of accuracy and computation cost. The source code is available at http://www.fer.unizg.hr/ipg/resources/color_constancy/.

  • Conference Article
  • Cited by 1
  • 10.1117/12.598036
Color constancy using fractals
  • Mar 11, 2005
  • Hawley K. Rising III + 1 more

We combine fractal decompression and the Retinex algorithm to devise a new color constancy method. We show that by using this approach, we can achieve color constancy and image compression simultaneously. Experimental results are included that show that this approach is quite promising. Keywords: fractals, color constancy, Retinex, diffusion. 1. INTRODUCTION: At first glance, the juxtaposition of color constancy, a topic which involves negating the effects of lighting and changes in dynamic range, with fractals, a topic usually identified with dynamical systems or image compression, may seem odd. We show here that this is not the case. In the process, we show that there is surprisingly little to do to convert a fractal decompression algorithm into an algorithm for color constancy, and that even such a discontinuous mapping can effect a change in dynamic range that makes sense to the eye. First we will go over the two topics in sufficient detail to support this argument. We will first recount the Retinex algorithm, which is our method for approaching color constancy. There are two published types; we will use the algorithm known as McCann99 [1]. Once we have established the basics of this algorithm, we will review the theory and execution of a rudimentary fractal compression and decompression algorithm. Finally, with this groundwork in place, we will show how to put the two algorithms together, and present our results and our pointers at future work.

  • Conference Article
  • Cited by 11
  • 10.1109/mmsp.1999.793805
Moment based normalization of color images
  • Jan 1, 1999
  • R Lenz + 2 more

In many multi-media applications it is desirable to separate the influence of the illumination sources and imaging equipment from the properties of the depicted scene. The ability of the human visual system to solve this task in many situations is known as color constancy. Technical applications of these methods include automatic color correction and illumination independent search in image databases. Many conventional computational color constancy methods assume that the effect of an illumination change can be described by a matrix multiplication with a diagonal matrix. In this paper we introduce a color normalization algorithm which computes the unique color transformation matrix which normalizes a given set of moments computed from the color distribution of an image. This normalization procedure is a generalization of the channel independent color constancy methods since general matrix transformations are considered. We compare the performance of this new normalization method with conventional color constancy methods. The experiments show that diagonal transformation matrices provide a better illumination compensation. This shows that the color moments also contain significant information about the color distributions of the objects in the image which is independent of the illumination characteristics. In another set of experiments we use the unique transformation matrix as a descriptor of the set of moments which describe the global color distribution in the image. Combining the matrices computed from two such images describes the color differences between them. We then use this as a tool for color dependent search in image databases. This matrix based color search is computationally less demanding than histogram based color search tools.
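One concrete way to read the moment-normalization idea is second-moment whitening: a unique symmetric positive-definite matrix maps the image's color second-moment matrix to the identity. The paper's actual moment set and uniqueness construction may differ; this is an illustrative sketch:

```python
import numpy as np

def moment_normalizer(pixels):
    """Return the symmetric positive-definite M with M S M^T = I, where
    S is the 3x3 second-moment matrix of an (N, 3) color distribution."""
    px = np.asarray(pixels, dtype=float)
    S = px.T @ px / len(px)                       # second-moment matrix
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T  # inverse square root of S
```

Applying M to every pixel normalizes the distribution's second moments; the paper then also reuses such matrices as global color descriptors for database search, since the matrix itself summarizes the color distribution.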

  • Research Article
  • 10.1364/josaa.506999
Nighttime color constancy using robust gray pixels.
  • Feb 20, 2024
  • Journal of the Optical Society of America A
  • Cheng Cheng + 4 more

Color constancy is a basic step for achieving stable color perception in both biological visual systems and the image signal processing (ISP) pipeline of cameras. So far, there have been numerous computational models of color constancy that focus on scenes under normal light conditions but are less concerned with nighttime scenes. Compared with daytime scenes, nighttime scenes usually suffer from relatively higher-level noise and insufficient lighting, which usually degrade the performance of color constancy methods designed for scenes under normal light. In addition, there is a lack of nighttime color constancy datasets, limiting the development of relevant methods. In this paper, based on the gray-pixel-based color constancy methods, we propose a robust gray pixel (RGP) detection method by carefully designing the computation of illuminant-invariant measures (IIMs) from a given color-biased nighttime image. In addition, to evaluate the proposed method, a new dataset that contains 513 nighttime images and corresponding ground-truth illuminants was collected. We believe this dataset is a useful supplement to the field of color constancy. Finally, experimental results show that the proposed method achieves superior performance to statistics-based methods. In addition, the proposed method was also compared with recent deep-learning methods for nighttime color constancy, and the results show the method's advantages in cross-validation among different datasets.
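A hedged sketch of the gray-pixel idea the paper builds on: under a locally uniform illuminant, the local contrasts of the three log channels agree at achromatic surfaces, so pixels where they deviate least are gray candidates. The RGP paper's actual illuminant-invariant measures are more robust than this toy version:

```python
import numpy as np

def gray_pixel_scores(img, eps=1e-6):
    """Per-pixel grayness score: cross-channel standard deviation of the
    local log-image contrasts (lower = more likely achromatic)."""
    log_img = np.log(img.astype(float) + eps)
    contrasts = []
    for c in range(3):
        gy, gx = np.gradient(log_img[:, :, c])
        contrasts.append(np.hypot(gx, gy))
    return np.stack(contrasts, axis=-1).std(axis=-1)

def estimate_illuminant(img, top_frac=0.01):
    """Average the top fraction of gray candidates as the illuminant."""
    scores = gray_pixel_scores(img)
    k = max(1, int(top_frac * scores.size))
    idx = np.argsort(scores.ravel())[:k]
    est = img.reshape(-1, 3)[idx].astype(float).mean(axis=0)
    return est / np.linalg.norm(est)
```

On a synthetic scene of achromatic surfaces under a colored illuminant, the estimate recovers the illuminant direction exactly, because every pixel is a valid gray candidate.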

  • Research Article
  • Cited by 13
  • 10.1109/access.2020.3030912
Deep Learning-Based Computational Color Constancy With Convoluted Mixture of Deep Experts (CMoDE) Fusion Technique
  • Jan 1, 2020
  • IEEE Access
  • Ho-Hyoung Choi + 1 more

In human and computer vision, color constancy is the ability to perceive the true color of objects despite changing illumination conditions. It remarkably benefits tasks such as human tracking, object and human detection, and scene understanding. Traditional color constancy approaches based on the gray-world assumption fall short of being a universal predictor, but recent color constancy methods have progressed greatly with the introduction of convolutional neural networks (CNNs). Yet shallow CNN-based methods face learning-capability limitations. Accordingly, this article proposes a novel color constancy method that uses a multi-stream deep neural network (MSDNN)-based convoluted mixture of deep experts (CMoDE) fusion technique to perform deep learning and estimate local illumination. In the proposed method, the CMoDE fusion technique is used to extract and learn spatial and spectral features in an image space. The proposed method distinctively piles up layers both in series and in parallel, selecting and concatenating effective paths in the CMoDE-based DCNN, as opposed to previous works where residual networks stack multiple layers linearly and concatenate multiple paths. As a result, the proposed CMoDE-based DCNN brings significant progress in both the efficiency of computing-resource use and the accuracy of illuminant estimation. In the experiments, Shi's Reprocessed, gray-ball, and NUS-8 Camera datasets are used to evaluate illumination and camera invariance. The experimental results establish that this new method surpasses its conventional counterparts.

  • Book Chapter
  • Cited by 220
  • 10.1007/bfb0055683
Is machine colour constancy good enough?
  • Jan 1, 1998
  • Brian Funt + 2 more

This paper presents a negative result: current machine colour constancy algorithms are not good enough for colour-based object recognition. This result surprised us, since we have previously used the better of these algorithms successfully to correct the colour balance of images for display. Colour balancing has been the typical application of colour constancy; rarely has it actually been put to use in a computer vision system, so our goal was to show how well the various methods would do on an obvious machine colour vision task, namely object recognition. Although all the colour constancy methods we tested proved insufficient for the task, we consider this an important finding in itself. In addition, we present results showing the correlation between colour constancy performance and object recognition performance; as one might expect, the better the colour constancy, the better the recognition rate.
