Abstract

3D instance segmentation is a fundamental task in computer vision. Effective segmentation plays an important role in robotic tasks, augmented reality, autonomous driving, etc. With the ascendancy of convolutional neural networks in 2D image processing, the use of deep learning methods to segment 3D point clouds has received much attention. Good convergence of the training loss often requires a large amount of human-annotated data, yet building such a 3D dataset is time-consuming. This paper proposes a method for training convolutional neural networks to predict instance segmentation results using synthetic data. The proposed method is based on the SGPN framework. We replaced the original feature extractor with a dynamic graph convolutional neural network, which learns to extract local geometric features, and proposed a simple and effective loss function that makes the network focus more on hard examples. We experimentally show that the proposed method significantly outperforms the state-of-the-art method on both the Stanford 3D Indoor Semantics Dataset and our own datasets.
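The abstract does not give the exact form of the hard-example-focused loss; a common realization of this idea is a focal-loss-style weighting, where a modulating factor suppresses the contribution of well-classified points. The sketch below (function name and `gamma` value are illustrative assumptions, not taken from the paper) shows the mechanism:

```python
import numpy as np

def focal_weighted_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary cross-entropy down-weighted for easy examples.

    p: predicted probabilities, y: binary targets in {0, 1}.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    (easy) points, so gradients concentrate on hard examples.
    This is a hypothetical sketch, not the paper's exact loss.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))
```

With `gamma = 0` this reduces to ordinary cross-entropy; larger `gamma` shifts the optimization further toward misclassified points.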

Highlights

  • Segmentation is an important means to make data easier to understand and analyze

  • Inspired by SGPN [7], which uses a single network for performing instance segmentation on point clouds, we propose a simple and effective method

  • We propose a novel method for instance segmentation on point clouds without color

Summary

INTRODUCTION

Segmentation is an important means to make data easier to understand and analyze. It is helpful for robot tasks [1], autonomous driving [2], augmented reality [3], and visual servoing [4]. An RGBD image or scene point cloud contains a lot of redundant information, for instance, irrelevant objects and background. A segmentation method mitigates computational cost and improves the precision of pose estimation [9]–[13]. Inspired by SGPN [7], which uses a single network to perform instance segmentation on point clouds, we propose a simple and effective method. The proposed method can obtain its training data by synthesis. With this method, we can recognize almost all the target objects in the scene and pick out the appropriate point cloud for robot tasks such as pose estimation and grasping.
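SGPN's core idea is to learn per-point features whose pairwise distances are small within an instance and large across instances, and then to propose groups by thresholding the resulting similarity matrix. The sketch below illustrates that grouping step with a greedy pass (the function names, the threshold value, and the greedy assignment are simplifying assumptions; SGPN additionally scores and merges overlapping proposals):

```python
import numpy as np

def pairwise_sq_dist(features):
    """Pairwise squared Euclidean distances between per-point features.

    features: (N, F) array. SGPN-style grouping treats small feature
    distance as evidence that two points belong to the same instance.
    """
    sq = np.sum(features ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    return np.maximum(d2, 0.0)  # clamp tiny negatives from rounding

def group_points(features, threshold=0.5):
    """Greedy grouping sketch: each unassigned point seeds a group of
    all unassigned points within `threshold` squared feature distance.
    A simplified stand-in for SGPN's group-proposal step."""
    d2 = pairwise_sq_dist(features)
    labels = -np.ones(len(features), dtype=int)
    next_label = 0
    for i in range(len(features)):
        if labels[i] == -1:
            mask = (d2[i] < threshold) & (labels == -1)
            labels[mask] = next_label
            next_label += 1
    return labels
```

Given well-separated learned features, points of the same object fall below the threshold and receive the same instance label, which can then be used to crop the object's point cloud for downstream pose estimation or grasping.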

RELATED WORK
DISTANCE-MASK
EXPERIMENT
Findings
CONCLUSIONS
