Abstract

Point cloud semantic segmentation is essential for understanding and analyzing scenes. However, semantic segmentation of large-scale point clouds is challenging due to high memory requirements, the lack of structured data, and the absence of topological information. This paper presents a novel method for segmenting large-scale point clouds based on the Reverse Attention Adaptive Fusion network (RAAFNet). RAAFNet consists of a reverse attention encoder–decoder module, an adaptive fusion module, and a local feature aggregation module. The reverse attention encoder–decoder module extracts point cloud features at different scales, and the adaptive fusion module enhances the fine-grained representation within the resulting multi-resolution feature maps. Furthermore, a local aggregation classifier is introduced, which aggregates the features of neighboring points onto each center point to leverage contextual information and enhance the classifier's perceptual capability; the predicted labels are then generated from the aggregated features. Notably, our method excels at extracting point cloud features across different dimensions and produces highly accurate segmentation results. On the Semantic3D dataset, it achieves an overall accuracy of 89.9% and a mIoU of 74.4%.
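
To make the neighbor-to-center aggregation idea concrete, the following PyTorch sketch gathers each point's k nearest neighbors and pools their features onto the center point. This is an illustrative assumption, not the paper's actual module: the function name, the choice of k, the concatenate-then-max-pool scheme, and the brute-force k-NN are all placeholders standing in for RAAFNet's local aggregation classifier.

```python
import torch

def local_feature_aggregation(points, features, k=16):
    """Aggregate k-nearest-neighbor features onto each center point.

    Illustrative sketch only; RAAFNet's actual local aggregation
    classifier may differ in neighbor search and pooling details.

    points:   (N, 3) xyz coordinates
    features: (N, C) per-point features
    returns:  (N, 2C) context-enriched features
    """
    # Brute-force pairwise distances; real pipelines would use a
    # spatial index (e.g. a KD-tree) for large-scale point clouds.
    dists = torch.cdist(points, points)              # (N, N)
    knn_idx = dists.topk(k, largest=False).indices   # (N, k)

    neighbor_feats = features[knn_idx]               # (N, k, C)
    center_feats = features.unsqueeze(1)             # (N, 1, C)

    # Pair each neighbor with its center, then max-pool over the
    # neighborhood so the center point absorbs local context.
    combined = torch.cat(
        [center_feats.expand(-1, k, -1), neighbor_feats], dim=-1
    )                                                # (N, k, 2C)
    aggregated, _ = combined.max(dim=1)              # (N, 2C)
    return aggregated

# Usage on random data: 1024 points with 32-dimensional features.
pts = torch.rand(1024, 3)
feats = torch.rand(1024, 32)
out = local_feature_aggregation(pts, feats)          # shape (1024, 64)
```

Max-pooling is used here because it is permutation-invariant over the neighborhood; a learned attention weighting, as the network's name suggests, would be a natural alternative.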
