Illuminant Estimation Using RGB Camera Image and Ambient Light Sensor Signal

Abstract

Illuminant estimation in camera color space is a critical step in the image signal processing pipeline, and it is commonly performed using the RGB RAW images captured by a camera. Most recent smartphones are also equipped with an ambient light sensor (ALS) near the camera. In this work, we propose an illuminant estimation method (RACC) that uses both RGB RAW images and ALS signals as inputs. A large dataset containing both RGB RAW images and ALS signals for more than 1300 scenes was collected and used to develop and evaluate the proposed method. The results show that RACC achieves much smaller angular errors while requiring far fewer computational resources. In particular, the method was designed with a training mechanism that addresses a practical challenge, incomplete or missing ALS signals under low light conditions, and was verified to maintain stable performance in that setting. These results suggest that the RACC method is well suited to practical deployment.
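The angular error mentioned in the abstract is the standard metric for illuminant estimation: the angle between the estimated and ground-truth illuminant vectors in camera RGB space. A minimal NumPy sketch (the function name is mine; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def angular_error_deg(estimate, ground_truth):
    """Angle, in degrees, between an estimated illuminant and the
    ground-truth illuminant in camera RGB space. Scale-invariant:
    only the chromaticity (direction) of each vector matters."""
    e = np.asarray(estimate, dtype=float)
    g = np.asarray(ground_truth, dtype=float)
    cos_sim = e @ g / (np.linalg.norm(e) * np.linalg.norm(g))
    # Clip to guard against floating-point values just outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0))))
```

Because the metric ignores overall intensity, an estimator only needs to recover the direction of the illuminant vector, not its magnitude.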

Similar Papers
  • Conference Article
  • Cited by 3
  • 10.1109/iscas48785.2022.9937670
An End-to-end Computer Vision System Architecture
  • May 28, 2022
  • Ling Zhang + 3 more

To overcome the data movement bottleneck, near-sensor and in-sensor computing are becoming increasingly popular. However, existing near-/in-sensor computing architectures for vision tasks ignore the effect of the image signal processing (ISP) pipeline, which is of great importance to final vision performance [1]. In this work, we propose a synthesized RAW-image-based end-to-end computer vision paradigm that takes the effect of the ISP pipeline into account. In the proposed approach, a generative adversarial network (GAN)-based tool converts fully processed color images to their corresponding RAW Bayer versions, generating the training data for end-to-end vision models. In the inference stage, RAW images from the sensor are fed directly to the end-to-end model, bypassing the entire ISP pipeline. Experimental results show that by training/tuning CNN models on synthesized RAW images, it is possible to design an end-to-end (from RAW image to vision task) system that directly consumes RAW image data from the sensor with negligible vision performance degradation. By skipping the ISP pipeline, an image sensor can be integrated directly with the back-end vision processor without a complex image processor in between, making near-/in-sensor computing a practical approach.
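The RAW Bayer format such a GAN-based tool targets can be illustrated by the plain subsampling that relates a full-color image to its mosaic. This sketch assumes an RGGB pattern and ignores the noise, tone-curve, and color-space effects a learned converter would also need to invert:

```python
import numpy as np

def mosaic_rggb(rgb):
    """Subsample an H x W x 3 RGB image into a single-channel Bayer
    mosaic with an RGGB pattern (a simplified stand-in for true RAW)."""
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even row, even col
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even row, odd col
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd row, even col
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd row, odd col
    return bayer
```

A learned RGB-to-RAW converter must additionally undo tone mapping and white balance, which is why the paper trains a GAN rather than applying this fixed subsampling alone.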

  • Research Article
  • Cited by 1
  • 10.1016/j.displa.2024.102637
RAWIW: RAW Image Watermarking robust to ISP pipeline
  • Jan 24, 2024
  • Displays
  • Kang Fu + 5 more


  • Conference Article
  • Cited by 3
  • 10.1109/icra46639.2022.9812052
Refactoring ISP for High-Level Vision Tasks
  • May 23, 2022
  • Yongjie Shi + 3 more

The image signal processing (ISP) pipeline, which transforms raw sensor measurements into a color image, is composed of a sequence of processing modules. Traditionally, the ISP pipeline is manually tuned by experts for human perception, and the resulting handcrafted configuration does not necessarily benefit downstream high-level vision tasks. To mitigate this problem, this paper presents a simple yet effective framework based on an evolutionary algorithm to search for compact ISP configurations for high-level vision tasks. In particular, we encode the ISP structure as a binary string and the ISP parameters as a set of floating-point numbers, then jointly optimize them against a task-specific loss and ISP computation budgets (e.g., running time) by solving a nonlinear multi-objective optimization problem. By mutating the configurations of the ISP pipeline, we are able to remove redundant modules and design an ISP with both low cost and high accuracy. We validate the proposed method on extremely noisy and low-light raw images, and experimental results show that our framework can find effective and efficient ISP configurations for both object detection and semantic segmentation tasks. We further provide a detailed analysis of the importance of different modules in the ISP configurations, which will benefit the design of ISPs for downstream tasks in the future.
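The genome described above, a binary string encoding which ISP modules are enabled plus a float vector of module parameters, can be sketched with a toy mutation operator. The module count, flip probability, and Gaussian step size below are illustrative assumptions, not values from the paper:

```python
import random

def mutate(structure, params, p_flip=0.1, sigma=0.05):
    """One mutation over an ISP configuration: flip each structure bit
    with probability p_flip, and jitter each parameter with Gaussian
    noise, clamped to [0, 1]."""
    new_structure = [bit ^ (random.random() < p_flip) for bit in structure]
    new_params = [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in params]
    return new_structure, new_params

# A toy 5-module pipeline: 1 = module enabled, 0 = module removed
structure = [1, 1, 0, 1, 1]
params = [0.50, 0.20, 0.80, 0.10, 0.90]
child_structure, child_params = mutate(structure, params)
```

An evolutionary search would then score each mutated configuration against the task loss and the computation budget, keeping the Pareto-optimal configurations.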

  • Research Article
  • Cited by 4
  • 10.1016/j.sigpro.2023.109135
BMISP: Bidirectional mapping of image signal processing pipeline
  • Jun 7, 2023
  • Signal Processing
  • Yahui Tang + 3 more


  • Research Article
  • Cited by 6
  • 10.1109/access.2021.3053607
LLISP: Low-Light Image Signal Processing Net via Two-Stage Network
  • Jan 1, 2021
  • IEEE Access
  • Hongjin Zhu + 5 more

Images taken in extremely low light suffer from heavy noise, blur, and color distortion. Assuming the low-light images contain a good representation of the scene content, current enhancement methods focus on finding a suitable illumination adjustment but often fail to deal with heavy noise and color distortion. Recently, some works have tried to suppress noise and reconstruct low-light images from raw data, but they apply a single network, instead of an image signal processing (ISP) pipeline, to map the raw data to enhanced results, which imposes a heavy learning burden on the network and yields unsatisfactory results. To remove heavy noise, correct color bias, and enhance details more effectively, we propose a two-stage Low-Light Image Signal Processing Network named LLISP. The design of our network is inspired by the traditional ISP: processing the images in multiple stages according to the attributes of the different tasks. In the first stage, a simple denoising module reduces heavy noise. In the second stage, we propose a two-branch network to reconstruct the low-light images and enhance texture details: one branch corrects color distortion and restores image content, while the other recovers realistic texture. Experimental results demonstrate that the proposed method can reconstruct high-quality images from low-light raw data and replace the traditional ISP.

  • Research Article
  • 10.1109/tpami.2025.3567308
Image Lens Flare Removal Using Adversarial Curve Learning.
  • Sep 1, 2025
  • IEEE transactions on pattern analysis and machine intelligence
  • Yuyan Zhou + 3 more

When images are taken against strong light sources, the results often contain heterogeneous flare artifacts that significantly degrade visual quality and downstream computer vision tasks. Because collecting real pairs of flare-corrupted/flare-free images for training flare removal models is challenging, current methods synthesize training data with a direct-add approach. However, these methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline, limiting the generalization of deep models trained on such data. Besides, existing light source recovery methods struggle to recover multiple light sources because light sources differ in size, shape, and illuminance. In this paper, we propose a solution that improves lens flare removal by revisiting the ISP, remodeling the principle of automatic exposure in the synthesis pipeline, and designing a more reliable light source recovery strategy. The new pipeline approaches realistic imaging by discriminating local and global illumination through a convex combination, avoiding global illumination shift and local over-saturation. Moreover, current deep models generalize only to specific devices because cameras' ISPs are diverse. To achieve better generalization across devices, we formulate the generalization problem as an adversarial training problem and embed an adversarial curve learning (ACL) paradigm in the synthesis pipeline. For recovering multiple light sources, our strategy convexly averages the input and output of the neural network based on illuminance levels, avoiding the need for a hard threshold when identifying light sources. We also contribute a new flare removal testing dataset containing flare-corrupted images captured by fifteen types of consumer electronics, which facilitates verifying the generalization capability of flare removal methods. Extensive experiments show that our solution effectively improves lens flare removal and pushes the frontier toward more general situations.

  • Research Article
  • 10.3390/app15063371
Conditional GAN-Based Two-Stage ISP Tuning Method: A Reconstruction–Enhancement Proxy Framework
  • Mar 19, 2025
  • Applied Sciences
  • Pengfei Zhan + 1 more

Image signal processing (ISP), a critical component in camera imaging, has traditionally relied on experience-driven parameter tuning. This approach suffers from inefficiency, fidelity issues, and conflicts with visual enhancement objectives. This paper introduces ReEn-GAN, an innovative staged ISP proxy tuning framework. ReEn-GAN decouples the ISP process into two distinct stages: reconstruction (physical signal recovery) and enhancement (visual quality and color optimization). By employing distinct network architectures and loss functions tailored to specific objectives, the two-stage proxy can effectively optimize both the reconstruction and enhancement modules within the ISP pipeline. Compared to tuning with an end-to-end proxy network, the proposed method’s proxy more effectively extracts hierarchical information from the ISP pipeline, thereby mitigating the significant changes in image color and texture that often result from parameter adjustments in an end-to-end proxy model. This paper conducts experiments on image denoising and object detection tuning tasks, and compares the performance of the two types of proxies. The results demonstrate that the proposed method outperforms end-to-end proxy methods on public datasets (SIDD, KITTI) and achieves over 21% improvement in performance metrics compared to hand-tuning methods.

  • Conference Article
  • Cited by 10
  • 10.1145/3211960.3211973
Keystroke inference using ambient light sensor on wrist-wearables
  • Jun 10, 2018
  • Mohd Sabra + 2 more

Many modern wrist-wearables, such as smartwatches and fitness trackers, are equipped with ambient light sensors that capture the surrounding light levels. While an ambient light sensor is intended to make applications environment-aware, malicious applications can potentially misuse it to infer private information pertaining to the wearer. Moreover, such an attack vector is hard to mitigate because the ambient light sensor is part of the zero-permission sensor suite on most wearable platforms; that is, any on-device application can access it without explicit user-level permissions. In this paper, we study the feasibility of a malicious smartwatch application leveraging ambient light sensor data to infer sensitive information about the wearer, specifically keystrokes typed on an ATM keypad. While multiple previous works target motion sensor data on wrist-wearables to infer keystrokes, we study whether a similar attack can be conducted using an ambient light sensor. The characteristic differences between motion and light data, and how they are affected during keystroke activity, imply that existing inference frameworks relying on motion data cannot be directly employed here. As a result, we design a new ambient-light-based keystroke inference framework that models the varying intensities of light on and around an ATM keypad to infer keystrokes. Our evaluation results indicate that an inference attack on keystrokes is moderately feasible, even with the coarse-grained ambient light sensors found on many low-cost wrist-wearables.

  • Research Article
  • Cited by 11
  • 10.1119/1.5064575
Characterization of linear light sources with the smartphone’s ambient light sensor
  • Nov 1, 2018
  • The Physics Teacher
  • Isabel Salinas + 3 more

The smartphone’s ambient light sensor has been used in the literature to study different physical phenomena. For instance, Malus’s law, which involves polarized light, has been verified by using the orientation and light sensors of a smartphone simultaneously. The illuminance of point light sources has also been characterized using the light sensor of smartphones and tablets, demonstrating the well-known inverse-square law of distance. Moreover, such illuminance measurements with the ambient light sensor have allowed the determination of the luminous efficiency of different quasi-point optical sources (incandescent and halogen lamps) as a function of the electric power supplied. Regarding mechanical systems, the inverse-square law of distance has also been used to investigate the speed and acceleration of a moving light source on an inclined plane and to study coupled and damped oscillations. In the present work, we go further and present a simple laboratory experiment that uses the smartphone’s ambient light sensor to characterize a non-point light source, in our case a linear fluorescent tube.
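The inverse-square law of distance these sensor experiments verify states that the illuminance from a point source falls with the square of the distance, E = I/d². A one-line sketch (symbols: I in candela, d in metres, E in lux):

```python
def illuminance(intensity_cd, distance_m):
    """Inverse-square law for a point source: E = I / d**2,
    with I in candela, d in metres, and E in lux."""
    return intensity_cd / distance_m ** 2

# Doubling the distance quarters the illuminance
assert illuminance(100.0, 2.0) == illuminance(100.0, 1.0) / 4
```

Fitting smartphone ALS readings against 1/d² is exactly the classroom exercise these papers build on; the linear-source case studied here instead shows an approximately 1/d fall-off close to a long tube.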

  • Book Chapter
  • 10.1007/978-3-030-94044-7_68
Characterization of Linear Light Sources with the Smartphone’s Ambient Light Sensor
  • Jan 1, 2022
  • Isabel Salinas + 3 more

The smartphone’s ambient light sensor has been used in the literature to study different physical phenomena [1–5]. For instance, Malus’s law, which involves polarized light, has been verified by using the orientation and light sensors of a smartphone simultaneously [1]. The illuminance of point light sources has also been characterized using the light sensor of smartphones and tablets, demonstrating the well-known inverse-square law of distance [2, 3]. Moreover, such illuminance measurements with the ambient light sensor have allowed the determination of the luminous efficiency of different quasi-point optical sources (incandescent and halogen lamps) as a function of the electric power supplied [4]. Regarding mechanical systems, the inverse-square law of distance has also been used to investigate the speed and acceleration of a moving light source on an inclined plane [5] or to study coupled and damped oscillations [6]. In the present work, we go further and present a simple laboratory experiment that uses the smartphone’s ambient light sensor to characterize a non-point light source, in our case a linear fluorescent tube.

  • PDF Download Icon
  • Research Article
  • Cited by 3
  • 10.3390/s24113396
A Programmable Ambient Light Sensor with Dark Current Compensation and Wide Dynamic Range.
  • May 24, 2024
  • Sensors (Basel, Switzerland)
  • Nianbo Shi + 3 more

Ambient light sensors are becoming increasingly popular due to their effectiveness in extending the battery life of portable electronic devices. However, conventional ambient light sensors are large in area, small in dynamic range, and do not account for the effects of dark current. To address these problems, this paper proposes a programmable ambient light sensor with dark current compensation and a wide dynamic range. The proposed sensor draws only 7.7 µA in dark environments and operates across a wide voltage range (2 to 5 V) and temperature range (−40 to 80 °C). It senses ambient light and provides an output current proportional to the ambient light intensity, with built-in dark current compensation that effectively suppresses the effects of dark current. It offers a wide dynamic range over the entire operating temperature range with three selectable output-current gain modes. The sensor was designed and fabricated in a 0.18 µm standard CMOS process, with an effective chip area of 663 µm × 652 µm. Its effectiveness was verified through testing, making it highly suitable for portable electronic products and fluorescent fiber-optic temperature sensors.

  • Book Chapter
  • Cited by 3
  • 10.1007/978-3-031-19800-7_29
RAWtoBit: A Fully End-to-end Camera ISP Network
  • Jan 1, 2022
  • Wooseok Jeong + 1 more

Image compression is an essential, final processing unit in the camera image signal processing (ISP) pipeline. While many studies have sought to replace the conventional ISP pipeline with a single end-to-end optimized deep learning model, image compression is rarely considered part of the model. In this paper, we investigate the design of a fully end-to-end optimized camera ISP that incorporates image compression. To this end, we propose the RAWtoBit network (RBN), which can effectively perform both tasks simultaneously. RBN is further improved with a novel knowledge distillation scheme that introduces two teacher networks, each specialized in one task. Extensive experiments demonstrate that our proposed method significantly outperforms alternative approaches in terms of the rate-distortion trade-off. Keywords: camera network; knowledge distillation; image compression; image signal processing pipeline.

  • Conference Article
  • Cited by 4
  • 10.1117/12.807927
An ultra-low-power ambient light sensor for portable devices
  • Feb 12, 2009
  • Soon-Ik Cho + 4 more

Power management is one of the most important issues in portable electronics such as cell phones, PDAs, UMPCs, GPS units, MP3 players, and laptop computers. Ambient light sensors are becoming popular as one of the most effective ways to extend battery lifetime in these devices. This paper provides basic information about ambient light sensors on a general level and introduces an ultra-low-power ambient light sensor for portable electronics. The implemented sensor converts light illuminance to 5-bit digital codes every 300 ms, measures illuminance from 10 to 1000 lux, and consumes only 5 µA. An IR-reject optical filter and a built-in integrating analog-to-digital converter reduce the influence of infrared light and of 50/60 Hz noise from artificial light sources, respectively. The sensor is fabricated in a standard 0.5 µm CMOS process. Test results show that the implemented ambient light sensor has an incandescent/fluorescent light sensitivity ratio of about 2.3.

  • Research Article
  • Cited by 4
  • 10.13031/aea.11678
Supplementary Light Source Development for Camera-Based Smart Spraying in Low Light Conditions
  • Jan 30, 2017
  • Applied Engineering in Agriculture
  • Travis Esau + 5 more

Abstract. High wind constraints on daytime agrochemical spraying have pushed wild blueberry producers to apply agrochemicals in the early morning, in the evening, or after dark, when low wind conditions reduce drift problems. The objective of this study was to develop an artificial light source system, combined with a smart sprayer comprising a digital camera-based sensing system, to allow the cameras to detect target areas (weed, plant, or bare soil) in real time for accurate application of agrochemicals in low light conditions. After testing and evaluating different light sources, a rugged light source system equipped with polystyrene diffuser sheets was constructed to provide an even distribution of light across the entire 12.2 m machine vision sensor boom. The distribution of artificial light underneath the sensing boom at zero ambient light was examined by recording the light intensity with a lux meter at 0.15 m spacing on the ground under the camera boom. The results revealed that the Magnafire® 70 W high intensity discharge (HID) lights provided a wide angle of even illumination, high intensity, and rugged construction. A wild blueberry field was selected in central Nova Scotia, Canada, and a test track was laid out to evaluate the performance of the artificial light source system for applying agrochemicals on a spot-specific basis under low natural light. A real-time kinematic global positioning system (RTK-GPS) was used to map the boundary of the test track and selected bare soil, weed, and wild blueberry plant areas in the field. Water-sensitive papers (WSPs) were placed at randomly selected locations, the smart sprayer was operated under low light conditions, and the percent area coverage (PAC) was calculated. The mean PAC from WSPs located in bare soil, weed, and blueberry spots in the track was 5.19%, 27.53%, and 1.74%, respectively. PAC of the WSPs placed in bare soil and blueberry patches was 22.34% and 25.79% lower than in weed patches, respectively. The results showed that the custom-developed artificial light source system was accurate enough to detect targets in low light conditions. Additionally, spraying only in weed areas resulted in 65% chemical savings.

  • Conference Article
  • Cited by 2
  • 10.1117/12.585567
Illuminant estimation for multichannel images
  • Jan 17, 2005
  • Xiaoyun Jiang + 1 more

Multi-channel imaging is finding more and more applications because of its better color reproduction and better spectral representation, which help avoid the metamerism problem. Illuminant estimation for multi-channel images has not been widely studied because most illuminant estimation methods are designed for trichromatic images. In this paper, common illuminant estimation methods such as gray world and maximum RGB are extended to multi-channel images. Five methods are evaluated: gray world, maximum RGB, the Maloney-Wandell method, modified illuminant detection in linear space, and reflectance-constraint illuminant detection. The methods are evaluated in terms of illuminant detection efficiency by estimating the illuminant correlated color temperature; among them, reflectance-constraint illuminant detection is the most efficient. In addition, the first three methods, previously used only for illuminant estimation in three-channel images, are applied to illuminant spectral recovery. Recovery efficiency is evaluated by comparing the recovered spectral distributions with the original ones. The Maloney-Wandell method shows a large efficiency improvement when the number of channels increases from three to four, and it achieves the best spectral recovery among the three tested methods when there are more than three channels.
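The gray world and maximum RGB baselines extended in this paper generalize directly to any number of channels: the illuminant estimate is the per-channel mean or per-channel maximum of the image. A minimal NumPy sketch (the unit-norm normalization is my own convention):

```python
import numpy as np

def gray_world(img):
    """Gray-world estimate for an H x W x C image with any channel
    count C: assume the average scene reflectance is achromatic, so
    the per-channel mean is proportional to the illuminant."""
    est = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return est / np.linalg.norm(est)

def max_channel(img):
    """Maximum-RGB estimate generalized to C channels: assume the
    brightest value in each channel reflects the illuminant."""
    est = img.reshape(-1, img.shape[-1]).max(axis=0)
    return est / np.linalg.norm(est)
```

Both estimators run unchanged on a four- or six-channel image, which is precisely the kind of extension the paper evaluates.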
