Abstract

Point cloud segmentation is a fundamental problem in 3D scene analysis. Due to the complexity of real-world scenes and the limitations of 3D scanners, interactive segmentation is currently the only practical way to cope with all kinds of point clouds. However, interactively segmenting complex and large-scale scenes is very time-consuming. In this paper, we present a novel interactive system for segmenting point cloud scenes. Our system automatically suggests a series of camera views in which users can conveniently specify segmentation guidance. Users can thus focus on specifying segmentation hints instead of manually searching for desirable views of unsegmented objects, significantly reducing user effort. To achieve this, we introduce a novel view preference model based on a set of dedicated view attributes, with weights learned from a user study. We also introduce support relations for both graph-cut-based segmentation and finding similar objects. Our experiments show that our segmentation technique helps users quickly segment various types of scenes, outperforming alternative methods.
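The view preference model described above scores candidate camera views as a weighted combination of view attributes, with the weights learned from a user study. The sketch below illustrates only that general idea; the attribute names (`unsegmented_visibility`, `object_centering`, `occlusion_free`) and weight values are hypothetical placeholders, not the paper's actual attribute set or learned weights.

```python
# Minimal sketch of a view preference model: each candidate camera view is
# described by a few scalar attributes, and views are ranked by a weighted sum.
# Attribute names and weights here are illustrative assumptions only.

def view_score(attrs, weights):
    """Score one candidate view as a weighted sum of its attributes."""
    return sum(weights[name] * attrs[name] for name in weights)

def suggest_view(candidates, weights):
    """Return the candidate view with the highest preference score."""
    return max(candidates, key=lambda view: view_score(view["attrs"], weights))

# Toy example with two candidate views.
weights = {
    "unsegmented_visibility": 0.5,  # how much unsegmented geometry is visible
    "object_centering": 0.3,        # how centered the target object appears
    "occlusion_free": 0.2,          # how little the target is occluded
}
candidates = [
    {"name": "view_a", "attrs": {"unsegmented_visibility": 0.9,
                                 "object_centering": 0.4,
                                 "occlusion_free": 0.7}},
    {"name": "view_b", "attrs": {"unsegmented_visibility": 0.3,
                                 "object_centering": 0.9,
                                 "occlusion_free": 0.9}},
]
best = suggest_view(candidates, weights)
```

In this toy setup, `view_a` wins because visibility of unsegmented points carries the largest weight; learning the weights from user preferences (as the paper does via a user study) would tune this trade-off automatically.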

Highlights

  • With the prevalence of consumer-grade depth sensors (e.g., Microsoft Kinect), scanning our living environments is becoming easier

  • Semantic segmentation, which aims to provide a decomposition of a 3D point cloud into semantically meaningful objects, is one of the most fundamental problems, and is important for many subsequent tasks such as object detection [1], object recognition [2], scene understanding [3], etc.

  • For M-D, while interactive segmentation was done on RGB-D images, we provided an additional point cloud viewer for examining the segmentation status


Introduction

With the prevalence of consumer-grade depth sensors (e.g., Microsoft Kinect), scanning our living environments is becoming easier. Semantic segmentation of 3D point clouds has been extensively studied, resulting in various techniques based, for instance, on region growing [4, 5], graph cuts [6, 7, 8], and learning [9, 10, 11]. Most of these approaches attempt to achieve semantic segmentation with little or even no user intervention. However, due to the complexity of real-world scenes and the limitations of 3D scanners, manual intervention is often inevitable [12].

