Abstract

Road scene reconstruction is a fundamental and crucial module in the perception stage of autonomous driving, and its quality affects later stages such as object detection, motion planning, and path planning. Traditionally, self-driving cars sense the environment with LiDAR, cameras, or a fusion of the two sensor types. However, approaches based on LiDAR or cameras alone miss crucial information, while fusion-based approaches often consume substantial computing resources. We are the first to propose a conditional Generative Adversarial Network (cGAN)-based deep learning model that reconstructs rich semantic scene images from upsampled LiDAR point clouds alone. This makes it possible to remove the cameras, reducing resource consumption and increasing the processing rate. Experiments on the KITTI dataset also demonstrate that our model can reconstruct color imagery from a single LiDAR point cloud and is efficient enough for real-time sensing on autonomous vehicles.
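To make the model's input concrete, the "upsampled LiDAR point cloud" can be pictured as a sparse depth image, obtained by projecting the 3D points through a pinhole camera model and then densifying it, which a cGAN generator would then translate to RGB. The sketch below is illustrative only: the function names, the intrinsics, and the naive dilation-based upsampling are assumptions for demonstration, not the paper's actual projection or upsampling method.

```python
import numpy as np

def project_to_depth(points, fx, fy, cx, cy, h, w):
    """Project Nx3 points (camera frame, z forward) into a sparse HxW depth map.

    Pixels with no projected point stay 0 (illustrative pinhole model)."""
    depth = np.zeros((h, w), dtype=np.float32)
    z = points[:, 2]
    valid = z > 0  # keep only points in front of the camera
    u = np.round(points[valid, 0] * fx / z[valid] + cx).astype(int)
    v = np.round(points[valid, 1] * fy / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]
    return depth

def upsample_depth(depth, iters=2):
    """Naive hole filling: copy any nonzero 4-neighbour into empty pixels."""
    d = depth.copy()
    for _ in range(iters):
        padded = np.pad(d, 1)
        neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                               padded[1:-1, :-2], padded[1:-1, 2:]])
        fill = neighbours.max(axis=0)
        d = np.where(d == 0, fill, d)  # only fill pixels that were empty
    return d

# A single LiDAR return on the optical axis at 10 m lands at (cx, cy);
# one dilation pass spreads it to the 4-neighbour pixels.
pts = np.array([[0.0, 0.0, 10.0]])
sparse = project_to_depth(pts, fx=100, fy=100, cx=5, cy=5, h=11, w=11)
dense = upsample_depth(sparse, iters=1)
```

A real pipeline would use the KITTI calibration matrices for the projection and feed the densified map to the cGAN generator; the dilation here merely stands in for whatever upsampling the paper employs.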
