Abstract

Cameras onboard autonomous vehicles, as a critical component of the automated-driving sensor suite, play a vital role in perceiving the driving and road environment. However, in bad weather or other unpredictable situations, the image quality obtained by the in-vehicle sensing camera is far from ideal, which becomes an extremely unsafe factor for autonomous driving. To improve the safety of self-driving vehicles, we propose CE-GAN, a novel approach for generating high-quality in-vehicle camera images: a conditional generative adversarial network that leverages point-cloud data from the on-board LiDAR to compensate for defects in the visible image and thereby improve the image quality of on-board cameras. Inspired by generative adversarial networks, our method establishes an adversarial game between the generator and the discriminator. We design loss functions tailored to the different causes of image-quality impairment, including partial occlusion and fog. Extensive experiments show that CE-GAN renders better performance in detail and texture than conventional Cycle-GAN and pix2pix methods, which lack the assistance of LiDAR data.
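The adversarial game mentioned above follows the standard conditional-GAN objective: the discriminator scores image/condition pairs as real or generated, while the generator tries to make its outputs score as real. The sketch below is a minimal numeric illustration of those two loss terms using hand-picked discriminator scores, not the paper's actual CE-GAN losses or architecture.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy between discriminator scores in (0, 1) and labels."""
    eps = 1e-8  # avoid log(0)
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

# Hypothetical discriminator scores for a batch of 4 image/point-cloud pairs:
d_real = np.array([0.9, 0.8, 0.95, 0.85])  # D(real image | LiDAR condition)
d_fake = np.array([0.1, 0.2, 0.05, 0.15])  # D(G(degraded image, LiDAR) | LiDAR)

# Discriminator objective: score real pairs as 1 and generated pairs as 0.
loss_d = bce(d_real, np.ones(4)) + bce(d_fake, np.zeros(4))

# Generator objective: fool the discriminator into scoring its outputs as 1.
loss_g = bce(d_fake, np.ones(4))
```

With these scores the discriminator is winning, so `loss_d` is small while `loss_g` is large; training alternates updates to the two networks until the scores equilibrate.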
