Lidar scanning is a widely used surveying and mapping technique across remote-sensing applications involving topological and topographical information. Unlike images, lidar point clouds typically lack an inherent, consistent structure and store redundant information, thus requiring long processing times. The Compressive Sensing (CS) framework leverages this redundancy to generate sparse representations and accurately reconstruct the signals from very few linear, non-adaptive measurements. The reconstruction is based on valid assumptions on the following parameters: (1) a sampling function, governed by the sampling ratio, for generating samples, and (2) a measurement function for sparsely representing the data in a low-dimensional subspace. In our work, we address the following motivating scientific questions: Is it possible to reconstruct dense point cloud data from a few sparse measurements? And what is the optimal limit for the CS sampling ratio with respect to overall classification metrics? Our work proposes a novel Convolutional Neural Network based deep Compressive Sensing Network (named LidarCSNet) for generating sparse representations using publicly available 3D lidar point clouds of the Philippines. We have performed extensive evaluations analysing the reconstruction for different sampling ratios {4%, 10%, 25%, 50% and 75%}, and we observed that our proposed LidarCSNet reconstructed the 3D lidar point cloud with a maximum PSNR of 54.47 dB for a sampling ratio of 75%. We investigate the efficacy of our novel LidarCSNet framework with 3D airborne lidar point clouds for two domains, forests and urban environments, using Peak Signal to Noise Ratio, Hausdorff distance, Pearson Correlation Coefficient and Kolmogorov-Smirnov Test Statistic as evaluation metrics for 3D reconstruction.
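To make the CS setting concrete, the sketch below illustrates (outside the paper's own pipeline, and not LidarCSNet itself) the classical recovery problem the abstract refers to: a k-sparse signal x is observed through m linear, non-adaptive measurements y = Ax with m < n (here a 50% sampling ratio, one of the ratios studied), and recovered greedily via Orthogonal Matching Pursuit. All dimensions and the random Gaussian measurement matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 128, 5              # signal length, sparsity level (illustrative)
m = n // 2                 # 50% sampling ratio

# Ground-truth k-sparse signal (stand-in for a sparse lidar representation)
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Random Gaussian measurement matrix: m linear, non-adaptive measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most
    correlated with the residual, then re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

At these dimensions recovery is essentially exact; a learned framework such as LidarCSNet replaces both the hand-crafted sparsifying step and the iterative solver with a trained network.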
The results relevant to forests, such as the Canopy Height Model and 2D vertical profile, are compared with the ground truth to investigate the robustness of the LidarCSNet framework. For the urban environment, we extend our work to propose two novel 3D lidar point cloud classification frameworks, LidarNet and LidarNet++, achieving a maximum classification accuracy of 90.6%, outperforming other prominent lidar classification frameworks. The improved classification accuracy is attributed to ensemble-based learning on the proposed novel 3D feature stack and justifies the robustness of using our proposed LidarCSNet for near-perfect reconstruction followed by classification. We document our classification results for the original dataset, along with the point clouds reconstructed by LidarCSNet for five different sampling ratios, using overall accuracy and mean Intersection over Union as evaluation metrics for 3D classification. It is envisaged that our proposed deep network based convolutional sparse coding approach for rapid lidar point cloud processing holds significant potential across a wide range of applications, either as a plug-and-play (reconstruction) framework or as an end-to-end (reconstruction followed by classification) system for scalability.