Abstract

Environmental perception at night and in low-light scenes is a difficult challenge. Owing to its extraordinary performance in the dark, the starlight camera is widely used in night driving assistance and various surveillance missions. However, starlight camera images lack color information, which hinders human interpretation. This paper proposes a novel approach for colorizing starlight images using a Generative Adversarial Network (GAN) architecture. The proposed method overcomes the time-space asynchronism of traditional heterogeneous data acquisition. We first introduce starlight-RGB image pair generation. Inspired by 3D perspective transformation, we use LiDAR, camera, and Inertial Measurement Unit (IMU) data to create generated visible images. We collect synchronized visible images, LiDAR point clouds, and IMU data in the daytime, and acquire LiDAR, starlight camera, and IMU data at night. This image pair generation method overcomes the difficulty of obtaining paired data, and the resulting pairs are aligned at the pixel level. Because there are no reflected LiDAR points from the sky, the perspective projection images have no content in the sky regions. Building on a supervised image-to-image translation GAN architecture, we additionally use daytime RGB images as unpaired data in order to restore the texture and color of the sky. We use the KITTI dataset for validation and achieve good experimental performance on our datasets.
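The 3D perspective transformation mentioned above (projecting LiDAR points into a camera image using extrinsic and intrinsic calibration) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, the calibration matrices, and the NumPy formulation are assumptions.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project N x 3 LiDAR points into a camera's image plane.

    points_lidar : (N, 3) XYZ points in the LiDAR frame.
    T_cam_lidar  : (4, 4) rigid transform from LiDAR frame to camera
                   frame (extrinsic calibration, assumed known).
    K            : (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and (M,) depths for the points
    lying in front of the camera.
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])  # (N, 4) homogeneous
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]          # transform to camera frame
    in_front = pts_cam[:, 2] > 0.0                      # discard points behind camera
    pts_cam = pts_cam[in_front]
    uvw = (K @ pts_cam.T).T                             # apply intrinsics
    pixels = uvw[:, :2] / uvw[:, 2:3]                   # perspective divide
    return pixels, pts_cam[:, 2]
```

In this sketch, points projected at night from LiDAR into the starlight camera's geometry would land at the same pixel locations as daytime projections under the same calibration, which is what makes pixel-level alignment of the image pairs possible. Sky pixels receive no projected points, consistent with the empty sky regions noted above.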
