Abstract

A profound understanding of the surrounding environment is crucial for the reliable operation of future self-driving cars. The Light Detection and Ranging (LiDAR) sensor plays a critical role in achieving such understanding thanks to its capability to perceive the world in 3D. As in 2D perception, current state-of-the-art methods for 3D perception tasks rely on deep neural networks (DNNs). However, performance on 3D perception tasks, especially point-wise semantic segmentation, is not on par with that of their 2D counterparts. One of the main reasons is the scarcity of publicly available labelled 3D point cloud datasets (PCDs) from 3D LiDAR sensors. In this work, we introduce the VoxelScape dataset, a large-scale simulated 3D PCD comprising 100K annotated point cloud scans. The annotations in the VoxelScape dataset include both point-wise semantic labels and 3D bounding box labels. Additionally, we use a number of baseline approaches to validate the transferability of VoxelScape to real 3D PCDs for two challenging 3D perception tasks. The promising results show that training DNNs on VoxelScape boosts the performance of these 3D perception tasks on real PCDs. Furthermore, we also release the proposed data generation pipeline so the research community can simulate realistic 3D LiDAR point cloud data for scenarios beyond those covered in our VoxelScape dataset. The VoxelScape dataset and the corresponding LiDAR simulation code are publicly available at https://voxel-scape.github.io/dataset
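For illustration, the sketch below shows how a single scan with point-wise semantic labels might be loaded. It assumes the common SemanticKITTI-style layout (points as float32 x, y, z, intensity tuples in a .bin file, one uint32 label per point in a .label file); this layout and the file names are assumptions for the example, not the documented VoxelScape format.

```python
import numpy as np

def load_scan(bin_path: str, label_path: str):
    """Load one LiDAR scan and its per-point semantic labels
    (hypothetical loader, assuming a SemanticKITTI-style layout)."""
    # Each point is stored as four float32 values: x, y, z, intensity.
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    # One uint32 label per point; in SemanticKITTI-style files the
    # lower 16 bits hold the semantic class id.
    labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF
    assert points.shape[0] == labels.shape[0], "point/label count mismatch"
    return points, labels

points, labels = load_scan("000000.bin", "000000.label")
print(f"{points.shape[0]} points, {np.unique(labels).size} classes present")
```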
