Abstract

In outdoor Light Detection and Ranging (LiDAR) point cloud classification, finding discriminative features for point cloud perception and scene understanding remains a great challenge. Features derived from raw, defect-laden (i.e., noisy, outlier-ridden, occluded and irregular) outdoor LiDAR scans usually contain redundant and irrelevant information, which adversely affects the accuracy of semantic point labeling. Moreover, point cloud features from different views can express different attributes of the same point, and simply concatenating these multi-view features cannot guarantee that the fused features are applicable and effective. To solve these problems and classify outdoor point clouds with fewer training samples, we propose a novel framework that jointly learns multi-view features and classifiers. The framework imposes multi-space constraints of label consistency and local distribution consistency for multi-view point cloud feature extraction and classification. Within it, manifold learning carries out joint subspace learning of the multi-view features under three kinds of constraints: local distribution consistency of the feature and position spaces, label consistency between the multi-view predicted labels and the ground truth, and label consistency among the multi-view predicted labels themselves. The model can be trained well with fewer training points, and an iterative algorithm solves the joint optimization of the multi-view feature projection matrices and linear classifiers. The multi-view features are then fused and used for effective point cloud classification. We evaluate the proposed method on five different point cloud scenes; the experimental results demonstrate that its classification performance is on par with or better than that of the compared algorithms.

Highlights

  • In recent years, with the rapid advancement of computer vision and Light Detection and Ranging technology, an increasing number of point clouds are acquired and widely used in various remote-sensing applications

  • We propose a feature extraction and point cloud classification model based on multiple views and space representation consistency under constraints of label consistency (MvsRCLC)

  • Unlike previous methods that concatenate multi-view features for classification, we propose a multi-view subspace learning method using diversity and consistency constraints between multi-view features, and then the multi-view features and the classifiers are coupled to classify point clouds, thereby improving labeling accuracy

Introduction

With the rapid advancement of computer vision and Light Detection and Ranging (LiDAR) technology, an increasing number of point clouds are acquired and widely used in various remote-sensing applications. In earlier multi-view learning work, a joint projection graph across the views was constructed from low-rank representations for clustering/classification. Although all of these methods outperform single-view feature-learning methods, they are not applicable to classifying outdoor point clouds. In multi-view rendering approaches, labeled vertices are projected back onto the original point cloud. Although these deep-learning-based methods obtain good results, they rely on full 3D meshes to generate the multi-view renderings, and reliable 3D meshes are difficult to obtain for outdoor point clouds. To the best of our knowledge, no existing multi-view learning method is applied directly to point cloud classification. To fill this gap, we propose a feature extraction and point cloud classification model based on multiple views and space representation consistency under constraints of label consistency (MvsRCLC). The proposed model is then optimized before it is used to classify point clouds.
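The joint learning idea — per-view projection matrices and linear classifiers optimized together under label-consistency and local-distribution-consistency constraints — can be sketched in a simplified form. The sketch below is illustrative only and is not the paper's solver: the function name `mvsrclc_sketch` is hypothetical, plain gradient descent replaces the paper's iterative closed-form updates of W, G and H, and a single Gaussian-affinity graph on one view stands in for the separate feature-space and position-space graphs.

```python
import numpy as np

def mvsrclc_sketch(views, y, labeled_mask, dim=4, lam=1.0, mu=0.01,
                   iters=1000, lr=0.05, seed=0):
    """Toy gradient-descent sketch of multi-view subspace learning with
    label-consistency constraints (assumed simplification, not the paper's method):
      (a) label consistency between each view's prediction and the ground truth,
          enforced only on the few labeled points;
      (b) label consistency among the views' predictions;
      (c) local distribution consistency via a graph-Laplacian smoothness term.
    """
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    # augment each view with a constant column so the linear maps are affine
    views = [np.hstack([X, np.ones((n, 1))]) for X in views]
    C = int(y.max()) + 1
    Y = np.eye(C)[y]                             # one-hot ground-truth labels
    m = labeled_mask.astype(float)[:, None]      # (n, 1) supervision mask
    # Gaussian-affinity graph on the first view -> (scaled) graph Laplacian
    D2 = ((views[0][:, None, :] - views[0][None, :, :]) ** 2).sum(-1)
    A = np.exp(-D2 / (D2.mean() + 1e-9))
    L = (np.diag(A.sum(1)) - A) / n
    # per-view projection matrices W_v and linear classifiers H_v
    Ws = [rng.normal(scale=0.1, size=(X.shape[1], dim)) for X in views]
    Hs = [rng.normal(scale=0.1, size=(dim, C)) for _ in views]
    for _ in range(iters):
        preds = [X @ W @ H for X, W, H in zip(views, Ws, Hs)]
        mean_pred = sum(preds) / len(preds)
        for v, (X, W, H) in enumerate(zip(views, Ws, Hs)):
            P = preds[v]
            # gradient of (1/n)*[||m*(P - Y)||^2 + lam*||P - mean_pred||^2] w.r.t. P
            G = (2.0 / n) * (m * (P - Y) + lam * (P - mean_pred))
            Z = X @ W                            # projection into the subspace
            gW = X.T @ (G @ H.T) + 2.0 * mu * (X.T @ (L @ Z))
            gH = Z.T @ G
            Ws[v] -= lr * gW
            Hs[v] -= lr * gH
    # fuse the views' predictions and label each point by the largest score
    fused = sum(X @ W @ H for X, W, H in zip(views, Ws, Hs)) / len(views)
    return fused.argmax(1)
```

With two toy views of the same clustered points and only a handful of labeled samples, the fused classifier labels the remaining points through the supervised term plus the graph-smoothness and cross-view agreement terms, mirroring the few-training-points setting described above.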

Multi-View Point Cloud Feature Extraction
Objective Function of MvsRCLC
Optimization Technique
Update of W
Update of G
Update of H
Point Cloud Labeling
Performance Evaluation
Experiment Data and Evaluation Metrics
Experimental Results
The First Experimental Group
(1) Comparison methods
Methods
Our Method
Parameters Analysis