Abstract

This work describes a deep learning-based autonomous landing zone identification module for a vertical takeoff and landing vehicle. The proposed module is developed using LiDAR point cloud data and can be integrated into the visual LiDAR odometry and mapping pipeline implemented on the vehicle. "ConvPoint," the top-performing neural network architecture on an online point cloud segmentation benchmark leaderboard at the time of writing, was chosen as the reference architecture. Semantic labeling of the datasets was performed using terrain geometry characteristics, with manual adjustment of labels through visual inspection. Point clouds captured at Memorial University, together with online point cloud datasets, were used to train the neural network model via transfer learning and to evaluate the accuracy-runtime trade-off of the proposed pipeline. The selected neural network model achieved accuracies of 89.7% and 92.1% on the two selected datasets, while processing 3940.15 and 3633.85 points per second, respectively, when predicting landing zone labels. Hyperparameter tuning was carried out to obtain higher throughput, achieving an update rate of 1 Hz for the landing zone map built from the point cloud inputs of the visual LiDAR odometry and mapping pipeline. The proposed system is validated by evaluating its performance on three variations of point clouds. The results confirm the accuracy-runtime trade-off of the proposed system and show that further optimization can improve performance.
