Abstract

This study proposes a generic approach to semantic segmentation of 3D point-cloud data. It first introduces a data-volume decomposition method that generates useful data features, and then implements a pipeline-parallelism protocol to accelerate the deep learning model's training phase. The proposed approach is verified by decomposing roughly 2.0 billion point-cloud data points, extracted from the open-source Semantic3D dataset, into many regular 3D structures with defined numbers of voxels; each derived structure has normality imposed on the data distribution of its label classes. Trained with the optimal hyperparameters, the resulting model achieves a mean overall accuracy (mOA) of 0.984 and a mean intersection over union (mIoU) of 0.752 on a testing dataset of close to 800 million point-cloud data points. These results are comparable with those of other state-of-the-art models in the literature.
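The decomposition step can be pictured as follows. This is a minimal NumPy sketch, assuming axis-aligned uniform partitioning of the bounding box; the function name decompose_into_voxels, the voxels_per_axis parameter, and the dictionary return type are illustrative assumptions, not the paper's implementation.

import numpy as np

def decompose_into_voxels(points, labels, voxels_per_axis=32):
    """Partition a labeled point cloud into a regular 3D voxel grid.

    points: (N, 3) array of xyz coordinates.
    labels: (N,) array of per-point class labels.
    Returns a dict mapping voxel index triples to (points, labels) subsets.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Edge length of each voxel along every axis; guard degenerate
    # (flat) axes so the division below stays well defined.
    edges = (maxs - mins) / voxels_per_axis
    edges[edges == 0] = 1.0
    # Integer voxel coordinates for every point, clipped so that points
    # lying exactly on the upper bound fall into the last voxel.
    idx = np.clip(((points - mins) / edges).astype(int),
                  0, voxels_per_axis - 1)
    blocks = {}
    for key in map(tuple, np.unique(idx, axis=0)):
        mask = np.all(idx == key, axis=1)
        blocks[key] = (points[mask], labels[mask])
    return blocks

At Semantic3D scale (billions of points), the same voxel indexing would presumably be computed tile by tile and streamed to disk rather than materialized in memory at once.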
