Abstract

This work develops a deep learning-based autonomous Landing Zone (LZ) identification module for a Vertical TakeOff and Landing (VTOL) drone using colored Light Detection and Ranging (LiDAR) point cloud data. "ConvPoint", a top-performing neural network (NN) architecture on the Semantic3D.net point cloud segmentation benchmark leaderboard, was chosen as the reference architecture for the development. A classification method based on terrain geometry characteristics is used for automatic labeling of the datasets, followed by manual adjustment of the labels through visual inspection. The selected automatic labeling method is a state-of-the-art LZ detection method reported in the literature, which also serves as the baseline for comparative evaluation. Point clouds captured by the Intelligent Systems Laboratory (ISL) at Memorial University of Newfoundland (MUN), together with online point cloud datasets, were used for network training and comparative evaluation of the methods. The results demonstrate the enhanced capability of deep learning-based methods to exploit both geometry and color information for LZ estimation, as well as the ability to perform LZ estimation on hardware accelerator modules. The deep learning-based methods achieved accuracies of up to 94% on datasets containing water bodies, where the classical approach had poor predictive capability due to its reliance on geometric information alone. The proposed LZ detection algorithm was run on a reconfigurable hardware-accelerated module to evaluate the real-time feasibility of the approach, which currently achieves a processing speed of 10,238 points per second on dedicated Jetson AGX Xavier hardware.
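
The geometry-based baseline referenced above labels points as landing-zone candidates from terrain characteristics such as local slope and roughness. The sketch below is a minimal illustration of that general class of geometric labeling, not the paper's actual algorithm; the neighborhood size, thresholds, and function name are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of geometry-based
# point labeling: a point is an LZ candidate if its local surface is nearly
# flat (low slope) and smooth (low roughness).
import numpy as np
from scipy.spatial import cKDTree

def label_lz_candidates(points, k=20, max_slope_deg=5.0, max_roughness=0.05):
    """Label each 3D point as LZ candidate (1) or not (0) using local slope
    and roughness estimated from its k nearest neighbors."""
    tree = cKDTree(points)
    labels = np.zeros(len(points), dtype=np.uint8)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        # Plane fit via PCA: the eigenvector with the smallest eigenvalue of
        # the covariance matrix approximates the local surface normal.
        cov = np.cov((nbrs - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        slope = np.degrees(np.arccos(abs(normal[2])))  # tilt from vertical
        roughness = np.sqrt(eigvals[0])                # spread along the normal
        if slope <= max_slope_deg and roughness <= max_roughness:
            labels[i] = 1
    return labels

# Example usage on a synthetic, nearly flat patch with small height noise:
pts = np.random.rand(1000, 3) * np.array([10.0, 10.0, 0.02])
print(label_lz_candidates(pts).mean())  # fraction of LZ-candidate points
```

In the workflow described in the abstract, labels produced by such a geometric pass would then be manually adjusted through visual inspection before being used to train the segmentation network.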
