Abstract

Semantic segmentation of 3D scenes is a fundamental problem in 3D computer vision. In this paper, we propose a deep neural network for 3D semantic segmentation of raw point clouds. A multi-scale feature learning block is first introduced to obtain informative contextual features from 3D point clouds. A global and local feature aggregation block is then added to improve the feature learning ability of the network. Building on these strategies, we present a powerful architecture, named 3DMAX-Net, for semantic segmentation of raw 3D point clouds. Experiments have been conducted on the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS) using only geometric information. Experimental results clearly demonstrate the superiority of the proposed network.
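The two ideas named in the abstract (multi-scale feature learning, and combining a global descriptor with per-point local features) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual 3DMAX-Net layers: the radii, the mean-pooled neighbor coordinates used as per-scale features, and the max-pooled global descriptor are all placeholders chosen for illustration.

```python
import numpy as np

def multi_scale_features(points, radii=(0.1, 0.2, 0.4)):
    """Toy multi-scale feature learning: for each point, pool the
    coordinates of neighbors within several radii and concatenate
    the per-scale summaries. Radii are illustrative assumptions."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    feats = []
    for r in radii:
        mask = d2 <= r * r                      # neighbors at this scale
        counts = mask.sum(1, keepdims=True)     # >= 1 (point is its own neighbor)
        pooled = (mask[:, :, None] * points[None, :, :]).sum(1) / counts
        feats.append(pooled)                    # mean neighbor position per scale
    return np.concatenate(feats, axis=1)        # (N, 3 * len(radii))

def aggregate_global_local(local_feats):
    """Toy global-and-local aggregation: max-pool a global descriptor
    over all points and concatenate it to every per-point feature."""
    g = local_feats.max(axis=0, keepdims=True)
    g_tiled = np.repeat(g, local_feats.shape[0], axis=0)
    return np.concatenate([local_feats, g_tiled], axis=1)

# Example: 128 random points in the unit cube, geometry only.
pts = np.random.rand(128, 3).astype(np.float32)
out = aggregate_global_local(multi_scale_features(pts))
print(out.shape)  # (128, 18): 3 scales x 3 dims, doubled by the global part
```

In the actual network these pooled statistics would be replaced by learned per-point features (e.g. MLP outputs), but the wiring, per-scale neighborhood pooling followed by concatenating a scene-level descriptor to each point, follows the same pattern the abstract describes.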
