Abstract
Purpose
With the upgrading of three-dimensional (3D) sensing devices, the amount of point cloud data collected has increased exponentially. However, most existing methods strike an unbalanced trade-off between memory consumption and semantic segmentation efficiency. This research addresses the need for a more balanced approach to processing large-scale point cloud data efficiently.

Design/methodology/approach
This research uses a network framework (DSF-Net) based on dual-path deep and shallow networks and designs a point cloud spatial pyramid pooling module based on atrous (hole) convolution. The 3D point cloud data are trained separately by the deep branch and the shallow branch. A deep-shallow fusion module then fuses the feature relationships of the two branches and outputs several loss functions for convergence during training.

Findings
It is found that DSF-Net improves segmentation efficiency and achieves a balanced result: it retains the ability to take large-scale point clouds as input while reducing memory consumption.

Originality/value
The deep network extracts high-level semantic information, while the shallow network has fewer layers and faster inference. Random sampling and point-atrous spatial pyramid pooling modules are used, respectively, in the deep and shallow networks to capture multi-scale local context in the point cloud.
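The abstract describes a dual-branch design in which a shallow, fast branch and a deep branch with multi-scale (atrous-style) context are fused before the segmentation head, and training combines several losses. The following is a minimal sketch of that idea, assuming a PyTorch setting; module names (DSFNetSketch, PointASPP), channel sizes, dilation rates, and the auxiliary-loss weighting are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a dual-branch (deep/shallow) point cloud segmentation
# network with a point-atrous spatial pyramid pooling style block and a
# deep-shallow fusion head. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_sample(points, feats, ratio=0.25):
    """Random point subsampling (points: (B, N, 3), feats: (B, C, N)).

    Shown as the downsampling primitive a deep branch could use; wiring it in
    would also require feature upsampling, which is omitted here for brevity."""
    B, N, _ = points.shape
    idx = torch.randperm(N, device=points.device)[: int(N * ratio)]
    return points[:, idx, :], feats[:, :, idx]


class PointASPP(nn.Module):
    """Pyramid of point-wise MLP branches over dilated kNN neighborhoods.

    Dilation is approximated by keeping every d-th neighbor of a precomputed
    kNN index of shape (B, N, K) and average-pooling their features."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        self.branches = nn.ModuleList(
            [nn.Conv1d(in_ch, out_ch, 1) for _ in dilations]
        )
        self.proj = nn.Conv1d(out_ch * len(dilations), out_ch, 1)

    def forward(self, feats, knn_idx):
        B, C, N = feats.shape
        batch = torch.arange(B, device=feats.device)[:, None, None]
        outs = []
        for d, conv in zip(self.dilations, self.branches):
            idx = knn_idx[:, :, ::d]                        # dilated neighbor set
            gathered = feats.transpose(1, 2)[batch, idx]    # (B, N, K/d, C)
            pooled = gathered.mean(dim=2).transpose(1, 2)   # (B, C, N)
            outs.append(F.relu(conv(pooled)))
        return self.proj(torch.cat(outs, dim=1))


class DSFNetSketch(nn.Module):
    def __init__(self, in_ch=3, num_classes=13):
        super().__init__()
        self.shallow = nn.Sequential(                       # few layers, fast inference
            nn.Conv1d(in_ch, 64, 1), nn.ReLU(), nn.Conv1d(64, 64, 1)
        )
        self.deep_in = nn.Conv1d(in_ch, 64, 1)
        self.deep_aspp = PointASPP(64, 128)                 # multi-scale local context
        self.fuse = nn.Conv1d(64 + 128, 128, 1)             # deep-shallow fusion
        self.head_fused = nn.Conv1d(128, num_classes, 1)
        self.head_deep = nn.Conv1d(128, num_classes, 1)     # auxiliary head for extra loss

    def forward(self, feats, knn_idx):
        shallow = self.shallow(feats)
        deep = self.deep_aspp(self.deep_in(feats), knn_idx)
        fused = F.relu(self.fuse(torch.cat([shallow, deep], dim=1)))
        return self.head_fused(fused), self.head_deep(deep)


# Joint training with a main loss plus an auxiliary loss on the deep branch;
# the 0.4 weight is an assumed value for illustration only.
# logits_main, logits_aux = DSFNetSketch()(feats, knn_idx)
# loss = F.cross_entropy(logits_main, labels) + 0.4 * F.cross_entropy(logits_aux, labels)
```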