Abstract

We address the problem of classifying 3D point clouds: given 3D urban street scenes gathered by a lidar sensor, we wish to assign a class label to every point. This work is a key step toward realizing applications such as autonomous robots and cars. In this paper, we present a novel approach to the classification of 3D urban scenes based on super-segments, which are generated from point clouds by two stages of segmentation: a clustering stage and a grouping stage. Six effective normal and dimension features that vary with object class are then extracted at the super-segment level and used to train general classifiers. We evaluate our method both quantitatively and qualitatively on the challenging Velodyne lidar data set. The results show that, using only normal and dimension features, we achieve better recognition than high-dimensional shape descriptors. We also evaluate incorporating an MRF framework into our approach, but the experimental results indicate that it barely improves the accuracy of the classification, owing to the sparsity of the super-segments.
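The pipeline the abstract describes (cluster points into segments, then extract normal and dimension features per segment for a classifier) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: voxel grouping stands in for the clustering stage, the grouping stage is omitted, and the PCA-based linearity/planarity/scattering ratios are one common choice of "dimension features"; the paper's six features may differ.

```python
import numpy as np

def cluster_points(points, cell=0.5):
    """Stage 1 (clustering), sketched as simple voxel grouping.

    A stand-in for the paper's clustering stage: points falling in the
    same grid cell of side `cell` form one cluster.
    """
    keys = np.floor(points / cell).astype(int)
    clusters = {}
    for i, key in enumerate(map(tuple, keys)):
        clusters.setdefault(key, []).append(i)
    return [points[idx] for idx in clusters.values()]

def segment_features(segment):
    """Normal and dimension features for one (super-)segment.

    Eigen-decomposition of the segment's covariance gives eigenvalues
    l1 >= l2 >= l3; the eigenvector of the smallest eigenvalue is the
    surface normal, and the eigenvalue ratios give the standard
    linearity / planarity / scattering dimension features.
    """
    w, v = np.linalg.eigh(np.cov(segment.T))  # eigenvalues, ascending
    l3, l2, l1 = w
    l1 = max(l1, 1e-12)                       # guard degenerate segments
    normal = v[:, 0]                          # direction of least variance
    return normal, ((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1)
```

On a roughly planar segment (a wall or the ground), the planarity ratio dominates and the recovered normal is perpendicular to the surface, which is why such features separate large man-made planar structures from scattered objects like vegetation.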

Highlights

  • With the increasing availability of 3D sensors such as lidar sensors, stereo and SFM (Structure from Motion) systems, 3D point clouds of urban scenes are easier than ever to collect

  • We present a novel framework for the classification of 3D point clouds using super‐segments

  • Our main contribution is a proposed segmentation algorithm to generate super‐segments, which consist of planar surfaces of large‐scale man‐made objects and small‐scale individual objects


Summary

Introduction

With the increasing availability of 3D sensors such as lidar sensors, stereo and SFM (Structure from Motion) systems, 3D point clouds of urban scenes are easier than ever to collect. It is important to provide a suite that can automatically segment and classify these data into object classes, since the segmentation and classification of 3D point clouds are critical for several important applications, including scene understanding and autonomous robots and cars (Yu Zhou, Yao Yu, Guiliang Lu and Sidan Du: Super-Segments Based Classification of 3D Urban Street Scenes. Int J Adv Robotic Sy, 2012, Vol. 9, 248:2012). Taking autonomous cars as an example: to move safely and efficiently through urban streets, as Figure 1 shows, an autonomous car needs to distinguish between the ground and obstacles, which include people, cars, signs, houses and fences, and to obtain the locations of those obstacles at the same time.

Literature Review
Overview of the Framework
Segmentation of Point Clouds
Clustering Stage
Grouping Stage
Features of Super‐segments
Experiments
Quantitative and Qualitative Evaluation
Results
Conclusion

