Abstract

Three-dimensional (3D) point cloud semantic segmentation is an essential part of computer vision for scene comprehension. Nevertheless, existing networks lose fine detail and therefore struggle to recognize complex scenes. This paper proposes a novel network architecture, called the ring grouping neural network with attention module (RGAM), which presents four improvements over existing networks. First, a novel multi-scale ring grouping learning scheme is designed to extract multi-scale neighborhood features without overlapped sampling, allowing the network to adapt to objects of different scales. Second, neighborhood information fusion is defined as the weighted sum of multiple neighborhood features, enabling the representation of each point to reflect several neighborhoods. Third, from a global view, a spatial attention module is introduced among the neighborhoods, allowing long-range contextual information to be exploited for 3D point cloud semantic segmentation. Finally, a channel attention module is appended to the RGAM: by weighting each channel according to its correlation with key information, it enhances the RGAM's ability to recognize complex scenes. Experimental results on the challenging S3DIS, ScanNet, and NYU-V2 datasets demonstrate that the RGAM achieves stronger recognition ability than several state-of-the-art networks for 3D point cloud semantic segmentation.
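The sketch below is a minimal illustration (not the authors' implementation) of two ideas mentioned in the abstract, assuming a PyTorch setting: non-overlapping ring (annulus) neighborhood grouping, where each scale gathers only the points whose distance to a centroid falls inside its ring, and a squeeze-and-excitation-style channel attention that reweights per-point feature channels. All names here (ring_group, ChannelAttention, radii, max_samples) are hypothetical placeholders for illustration only.

```python
import torch
import torch.nn as nn


def ring_group(points, centroids, radii, max_samples=32):
    """Group points into non-overlapping rings around each centroid.

    points:    (B, N, 3) input coordinates
    centroids: (B, M, 3) sampled centroid coordinates
    radii:     increasing radii [r1, r2, ...]; ring k covers [r_{k-1}, r_k)
    Returns one (B, M, max_samples) index tensor per ring.
    """
    # Pairwise distances between centroids and all points: (B, M, N)
    dist = torch.cdist(centroids, points)
    groups = []
    inner = 0.0
    for outer in radii:
        # Keep only points inside the current annulus, so rings never overlap.
        mask = (dist >= inner) & (dist < outer)
        # Rank in-ring points first, then take the closest max_samples indices.
        scores = torch.where(mask, dist, torch.full_like(dist, float("inf")))
        idx = scores.argsort(dim=-1)[..., :max_samples]
        groups.append(idx)
        inner = outer
    return groups


class ChannelAttention(nn.Module):
    """Reweight feature channels by their global importance (SE-style)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: (B, N, C) per-point features
        weights = self.fc(feats.mean(dim=1))   # (B, C) channel descriptors
        return feats * weights.unsqueeze(1)    # rescale each channel
```

In this sketch the ring boundaries share edges ([r_{k-1}, r_k)), which is one way to realize "multi-scale neighborhood features without overlapped sampling"; the channel attention follows the common squeeze-and-excitation pattern rather than the paper's exact module.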
