Abstract

RGBN cameras, which capture visible and near-infrared (NIR) light simultaneously, produce better color image quality in low-light conditions. However, they introduce additional color bias caused by the mixing of visible and NIR information. The color correction matrix model widely used in commercial color digital cameras cannot handle the complicated mapping between the biased color and the ground-truth color. Convolutional neural networks (CNNs) are good at fitting such complicated relationships, but they require a large quantity of training image pairs covering different scenes. Even when data augmentation techniques are applied, achieving satisfactory training results demands large amounts of manually captured data, which takes significant time and effort. Hence, a method is proposed for generating training pairs consistent with the target RGBN camera parameters, based on an open-access RGB-NIR dataset. The proposed method is verified by training an RGBN camera color restoration CNN model with the generated data. The results show that the CNN model trained with the generated data can achieve satisfactory RGBN color restoration performance with different RGBN sensors.
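For context, the color correction matrix (CCM) model the abstract refers to is a single 3×3 linear map from sensor RGB to target RGB. A minimal sketch follows; the matrix values are invented for illustration (a real CCM is calibrated per camera), which is why such a fixed linear map cannot represent the scene-dependent, nonlinear bias caused by NIR mixing:

```python
import numpy as np

# Hypothetical CCM: each row maps sensor (R, G, B) to one output channel.
# Rows sum to 1 so that gray inputs are preserved. Values are illustrative.
ccm = np.array([
    [ 1.8, -0.5, -0.3],
    [-0.4,  1.6, -0.2],
    [-0.1, -0.6,  1.7],
])

def apply_ccm(rgb, ccm):
    """Apply a 3x3 color correction matrix to an (..., 3) array of linear RGB."""
    return rgb @ ccm.T

# One biased sensor triplet mapped through the CCM.
biased = np.array([0.4, 0.5, 0.3])
print(apply_ccm(biased, ccm))  # -> [0.38 0.58 0.17]
```

Because the map is the same for every pixel and every scene, it cannot undo a color shift that depends on how much NIR energy each material reflects; this is the limitation that motivates the CNN-based restoration.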

Highlights

  • Most commercial color digital cameras capture color spectral information through a Bayer 25% red, 50% green, 25% blue (RGGB) color filter array (CFA) coating in front of the sensor

  • The results show that the convolutional neural network (CNN) model trained with the generated data can achieve satisfactory RGBN color restoration performance with different RGBN sensors

  • The proposed method obtains better results on angular error (AE), color difference (ΔEab), and peak signal-to-noise ratio (PSNR), but slightly worse results on structural similarity index measure (SSIM), compared with the Han method


Introduction

Most commercial color digital cameras capture color spectral information through a Bayer 25% red, 50% green, 25% blue (RGGB) color filter array (CFA) coated in front of the sensor, so their responses are limited to visible light. With a CFA, a single camera sensor captures different spectral information at different pixel positions, at the cost of lower spatial resolution; full-band color information for each pixel is then reconstructed by demosaicing. To prevent the mixing of visible and near-infrared (NIR) light, an infrared cut-off filter (IRCF) is placed in front of the sensor. A red, green, blue, near-infrared (RGBN) CFA has recently been introduced that replaces one of the two green pixels in each RGGB block with an NIR pixel.
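The CFA sampling described above can be sketched in a few lines. The function below (a minimal illustration, not code from the paper) builds an RGGB Bayer mosaic from a full-color image, showing how one sensor value per pixel is kept; demosaicing would then interpolate the two missing channels at each position:

```python
import numpy as np

def bayer_rggb_mosaic(img):
    """Simulate RGGB CFA sampling: img is an (H, W, 3) RGB array with even
    H and W; returns an (H, W) single-channel mosaic, one sample per pixel."""
    h, w, _ = img.shape
    mosaic = np.zeros((h, w), dtype=img.dtype)
    mosaic[0::2, 0::2] = img[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = img[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = img[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = img[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

An RGBN CFA would differ only in the last assignment pattern: one of the two green sites per 2×2 block would take an NIR sample instead, which is what allows a single exposure to carry both visible and NIR information.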


