Abstract

Point cloud classification is a critical task in remote sensing data interpretation and is widely used in many fields. Recently, many proposed methods develop an end-to-end network that operates directly on the raw point cloud, and these have shown great power. However, most of these methods abstract local features by considering all neighboring points equally. The learned features may fail to distinguish the contributions of different points, especially edge points and outliers, leading to coarse classification results, particularly at boundaries. Moreover, the extracted features are highly redundant and intercorrelated across similar categories, making it difficult to identify classes that share similar characteristics, especially in complex scenes. Therefore, we propose an adaptive context balancing and feature filtering network (CBF-Net) to tackle these problems. First, we introduce a balanced context encoding module that semantically balances the features of neighboring points, helping the model learn more from edge points and thereby contributing to a finer classification. Then, considering that interference between similar classes can cause confusion among them, a filtered feature aggregating module is proposed to filter the extracted features by mapping them into a cleaner, lower-rank subspace. We have conducted thorough experiments on the International Society for Photogrammetry and Remote Sensing 3-D labeling dataset. Experimental results show that CBF-Net obtains high accuracy and achieves state-of-the-art performance in the Powerline, Car, and Facade categories. In addition, experiments on the RueMonge2014 dataset further demonstrate the strong ability of our model.
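The "mapping into a cleaner subspace with a lower rank" idea can be illustrated with a minimal NumPy sketch. This is not the paper's filtered feature aggregating module (which is learned end to end); it is a hypothetical stand-in that suppresses channel redundancy by keeping only the dominant subspace of a point-feature matrix via truncated SVD. The function name `low_rank_filter` and the fixed `rank` parameter are assumptions for illustration.

```python
import numpy as np

def low_rank_filter(features, rank):
    """Project point features onto their top-`rank` principal directions.

    `features` is (num_points, num_channels). Redundant, intercorrelated
    channel variation is suppressed by reconstructing the features from
    only the `rank` strongest singular components.
    """
    # Center the features so the SVD captures variance, not the mean.
    mean = features.mean(axis=0, keepdims=True)
    centered = features - mean
    # Truncated SVD: keeping all components reconstructs `centered`
    # exactly; keeping the first `rank` yields a low-rank approximation.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    filtered = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank] + mean
    return filtered

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))       # 100 points, 32 feature channels
Y = low_rank_filter(X, rank=8)
print(Y.shape)                       # (100, 32)
```

The output keeps the original shape, but its centered version has rank at most 8, so downstream classifiers see a less redundant representation.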

Highlights

  • We propose a balanced context encoding (BCE) module to adaptively balance local features, which effectively alleviates coarse classification at boundaries

  • We introduce an end-to-end framework for efficient point cloud classification
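The balancing idea behind the BCE module can be sketched as attention-weighted neighbor aggregation: instead of averaging neighboring points equally, each neighbor receives a weight derived from its relation to the center point. This is a simplified, hypothetical stand-in for the learned module; the scoring vector `w` and the function `balanced_aggregate` are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def balanced_aggregate(center, neighbors, w):
    """Aggregate neighbor features with adaptive per-neighbor weights.

    `center` is (c,), `neighbors` is (k, c), and `w` is a (c,) scoring
    vector standing in for learned parameters. A score is computed from
    each neighbor's offset to the center feature, turned into attention
    weights, and used for a weighted sum instead of a uniform mean.
    """
    offsets = neighbors - center          # (k, c) relative features
    scores = offsets @ w                  # (k,) one score per neighbor
    weights = softmax(scores)             # attention over the k neighbors
    return weights @ neighbors            # (c,) balanced aggregation

center = np.zeros(4)
neighbors = np.array([[1.0, 0, 0, 0],
                      [0, 1.0, 0, 0],
                      [0, 0, 1.0, 0]])
agg = balanced_aggregate(center, neighbors, w=np.zeros(4))
```

With `w` set to zeros all neighbors score equally and the aggregation reduces to the plain mean; a nonzero `w` shifts weight toward neighbors whose offsets align with it, which is how edge points could receive larger contributions than in equal-weight schemes.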


Introduction

Benefiting from the improvement of 3D data acquisition technologies, it has become more and more convenient and economical to obtain 3D data through 3D vision scanners, such as Light Detection and Ranging (LiDAR) systems, laser scanners, and Red Green Blue-Depth (RGB-D) cameras. The classification of point clouds, called point cloud semantic segmentation in the field of computer vision, is the task of assigning a semantic label to each point according to its relevant features. To address this problem, traditional methods rely heavily on hand-crafted features and classifiers from the field of machine learning [5]. They first design various types of task-specific hand-crafted features, such as waveform features [6,7,8], eigenvalue features [9], and echo-based features [10]. For point clouds that exhibit huge scale variation among different categories, it becomes harder to make correct classifications using such traditional methods.

