Abstract

This paper presents a 3D scene reconstruction method for autonomous driving in a wide range of outdoor environments. Autonomous vehicles, most of which currently employ laser and image sensors, require systems for object detection, obstacle avoidance, navigation, etc. One of the most important pieces of information for these systems is an accurate, dense 3D depth map. However, range data is much sparser than image data, so the challenge is to reconstruct a dense depth map from sparse range data and image data. Here we propose a novel approach that fuses these different types of sensor data to reconstruct 3D scenes while maintaining the shape of local objects. Our method has two main phases: a local range modeling phase and a 3D depth map reconstruction phase. In the local range modeling phase, we interpolate 3D points from the laser scanner using Gaussian Process regression, which estimates 3D measurements across the outdoor environment and compensates for defective sensor information. In the reconstruction phase, we fuse the image and the interpolated points to build a 3D depth map and optimize it with a Markov Random Field, which yields a depth value for every image pixel. Qualitative and time complexity results show that our approach is robust and fast enough to run in real time on an autonomous vehicle in complex urban scenes.
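The local range modeling phase described above rests on standard Gaussian Process regression. The following is a minimal sketch of that idea, not the paper's implementation: it interpolates a handful of sparse 1D range returns onto a dense grid using a squared-exponential kernel. All function names, hyperparameters, and sample values here are illustrative assumptions; the posterior variance is what lets downstream stages discount defective or missing returns.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    # Squared-exponential covariance between 1D point sets a (n,) and b (m,)
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_interpolate(x_sparse, z_sparse, x_dense, noise=1e-2):
    # Standard GP regression: posterior mean and per-point variance
    K = rbf_kernel(x_sparse, x_sparse) + noise * np.eye(len(x_sparse))
    Ks = rbf_kernel(x_dense, x_sparse)
    mean = Ks @ np.linalg.solve(K, z_sparse)
    Kss = rbf_kernel(x_dense, x_dense)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var

# Hypothetical sparse laser returns along one scan line: position -> range (m)
x_sparse = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
z_sparse = np.array([10.0, 9.5, 8.0, 8.2, 9.0])
x_dense = np.linspace(0.0, 5.0, 51)

mean, var = gp_interpolate(x_sparse, z_sparse, x_dense)
```

The posterior mean fills the gaps between laser returns, while the variance grows away from observed points, flagging regions where the interpolated depth is less trustworthy.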
