Abstract

Creating virtual duplicates of the real world has garnered significant attention due to its applications in areas such as autonomous driving, urban planning, and urban mapping. One of the critical tasks in the computer vision community is semantic segmentation of point clouds collected outdoors. The development of robust semantic segmentation algorithms relies heavily on precise and comprehensive benchmark datasets. In this paper, we present the York University Teledyne Optech 3D Semantic Segmentation Dataset (YUTO Semantic), a multi-mission large-scale aerial LiDAR dataset specifically designed for 3D point cloud semantic segmentation. The dataset comprises approximately 738 million points covering an area of 9.46 square kilometers, with a high point density of 100 points per square meter. Each point in the dataset is annotated with one of nine semantic classes. Additionally, we conducted performance tests of state-of-the-art algorithms to evaluate their effectiveness in semantic segmentation tasks. The YUTO Semantic dataset serves as a valuable resource for advancing research in 3D point cloud semantic segmentation and contributes to the development of more accurate and robust algorithms for real-world applications. The dataset is available at https://github.com/Yacovitch/YUTO_Semantic.
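
As a rough illustration only (not taken from the paper), the per-tile point density of an aerial LiDAR dataset can be estimated as the point count divided by the planar footprint of the tile. The minimal sketch below assumes the tiles are distributed as LAS/LAZ files readable with the laspy library; the file name "yuto_tile.las" is hypothetical.

    # Minimal sketch: estimate average point density for one aerial LiDAR tile.
    # Assumptions: tile is a LAS/LAZ file, coordinates are in meters,
    # and "yuto_tile.las" is a hypothetical file name.
    import laspy
    import numpy as np

    def point_density(las_path: str) -> float:
        """Return points per square meter over the tile's 2D bounding-box footprint."""
        las = laspy.read(las_path)
        x = np.asarray(las.x)
        y = np.asarray(las.y)
        area_m2 = (x.max() - x.min()) * (y.max() - y.min())  # planar footprint in m^2
        return len(las.points) / area_m2

    if __name__ == "__main__":
        print(f"density: {point_density('yuto_tile.las'):.1f} pts/m^2")

Note that a bounding-box footprint overestimates the covered area for irregularly shaped tiles, so densities computed this way are lower bounds on the true density over the surveyed surface.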
