Abstract

In recent years, advances in computer vision, deep learning, and artificial intelligence, together with the growing availability of depth sensors and lidar, have driven rapid progress in three-dimensional (3D) point cloud semantic segmentation. Semantic segmentation of 3D point clouds in large-scale unstructured agricultural scenes is important for agricultural robots to perceive their surroundings, navigate and localize autonomously, and understand scenes on their own. This study addresses 3D point cloud semantic segmentation for such scenes. By improving the neural network structure of RandLA-Net, a deeper 3D point cloud semantic segmentation model for large-scale unstructured agricultural scenes was built and achieved good experimental results. The local feature aggregation module in RandLA-Net was integrated and improved to perform 3D point cloud semantic segmentation in these scenes. To test the influence of the point cloud sampling algorithm on the overall accuracy (OA) and mean intersection-over-union (mIoU) of semantic segmentation, two models with the same network structure were built, one using random sampling and the other using farthest point sampling. The test results show that the sampling algorithm has little effect on the OA and mIoU of 3D point cloud semantic segmentation; the final result depends mainly on the extraction of point cloud features. In addition, two different Semantic3D datasets were used to test the effect of the training data on the generalization ability of the model, and the results show that the datasets have an important effect on the neural network model.
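To make the comparison concrete, the sketch below illustrates the two point-sampling strategies the abstract contrasts, random sampling and farthest point sampling, using plain NumPy. It is a minimal illustration, not the paper's implementation; the array shapes, function names, and sample count are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the paper's code) of the two sampling strategies
# compared in the study: random sampling vs. farthest point sampling.
import numpy as np

def random_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Uniformly pick n_samples points from an (N, 3) cloud."""
    idx = np.random.choice(points.shape[0], n_samples, replace=False)
    return points[idx]

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Iteratively pick the point farthest from the already-selected set."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)          # distance to nearest selected point
    selected[0] = np.random.randint(n) # arbitrary seed point
    for i in range(1, n_samples):
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        selected[i] = np.argmax(dist)  # farthest remaining point
    return points[selected]

# Usage: subsample a synthetic 100k-point cloud to 4,096 points.
cloud = np.random.rand(100_000, 3).astype(np.float32)
rs_cloud = random_sampling(cloud, 4096)
fps_cloud = farthest_point_sampling(cloud, 4096)
print(rs_cloud.shape, fps_cloud.shape)  # (4096, 3) (4096, 3)
```

Random sampling runs in constant time per point, which is why RandLA-Net favors it for large-scale clouds, while farthest point sampling costs more but spreads the selected points evenly; the abstract reports that this choice had little effect on OA and mIoU.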
