Abstract

The light detection and ranging (LiDAR) sensor is attracting significant attention in the field of smart transportation because of its ability to provide depth information, and it is already extensively used for obstacle detection. An inertial navigation system (INS) combined with a global navigation satellite system (GNSS) gives the global position, orientation, and velocity of an object in real time. In autonomous driving, LiDAR point clouds are segmented to find the positions of objects in the surroundings of the vehicle of interest; however, each obstacle position is determined only in local east–north–up (ENU) coordinates. In this article, an end-to-end framework is presented that takes input from LiDAR and INS/GNSS systems and gives obstacle positions in universal coordinates (latitude, longitude) in real time. The proposed framework includes ground point removal from raw LiDAR data, obstacle segmentation, and fusion of LiDAR data with INS/GNSS data for georeferencing. For LiDAR data processing, a novel ground removal method is presented, in which the entire point cloud is divided into square grids in the horizontal plane and ground points are identified from the statistics of the vertical distribution of the points in each grid. Two ground point datasets, containing point clouds of ground with varying inclination captured by various multichannel LiDARs, were also created for testing the proposed algorithm. The proposed ground removal method was evaluated on the Paris-Lille-3D dataset and on the datasets created, achieving F1-scores greater than 0.99 and 0.98 on our datasets and the Paris-Lille-3D dataset, respectively. The framework was tested on different hardware configurations and found suitable for real-time applications.
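The grid-based ground removal step described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the cell size, the height threshold, and the choice of the per-cell minimum height as the vertical-distribution statistic are assumptions made for the example.

```python
import numpy as np

def remove_ground(points, cell_size=0.5, height_thresh=0.2):
    """Label ground points in an (N, 3) point cloud.

    The horizontal (XY) plane is divided into square grid cells; a point is
    marked as ground if its height above the lowest point in its cell is
    below `height_thresh`. Returns a boolean mask (True = ground).
    Both thresholds are illustrative values, not the paper's parameters.
    """
    # Assign each point to a square grid cell in the XY plane.
    cells = np.floor(points[:, :2] / cell_size).astype(np.int64)
    # Collapse 2-D cell indices into one integer id per occupied cell.
    _, cell_ids = np.unique(cells, axis=0, return_inverse=True)
    # Lowest z value seen in each cell (unbuffered in-place minimum).
    cell_min_z = np.full(cell_ids.max() + 1, np.inf)
    np.minimum.at(cell_min_z, cell_ids, points[:, 2])
    # Ground = points lying close to the floor of their own cell.
    return points[:, 2] - cell_min_z[cell_ids] < height_thresh
```

On a flat synthetic scene (a plane of points at z = 0 with a single elevated point), the mask keeps the plane as ground and flags the elevated point as an obstacle; real scenes with slopes would need per-cell statistics richer than the minimum alone.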
