Abstract

Recent studies employ advanced deep convolutional neural networks (CNNs) for monocular depth perception, which can hardly run efficiently on small drones that rely on low- to mid-range GPUs (e.g., TX2 and 1050Ti) for computation. In addition, methods that effectively and efficiently produce probabilistic depth predictions with a measure of model confidence have not been well studied. The lack of such a method could yield erroneous, sometimes fatal, decisions in drone applications (e.g., selecting a waypoint in a region with large depth yet low estimation confidence). This paper presents a real-time onboard approach for monocular depth prediction and obstacle avoidance with a lightweight probabilistic CNN (pCNN), making it well suited to lightweight, energy-efficient drones. For each video frame, our pCNN efficiently predicts a depth map and the corresponding confidence. The accuracy of our lightweight pCNN is greatly boosted by integrating sparse depth estimates from visual odometry into the network to guide dense depth and confidence inference. The estimated depth map is transformed into Ego Dynamic Space (EDS) by embedding both the drone's dynamic motion constraints and the confidence values into the spatial depth map. Traversable waypoints are automatically computed in EDS, from which appropriate control inputs for the drone are derived. Extensive experiments on public datasets demonstrate that our depth prediction method runs at 12 Hz on a TX2 and 45 Hz on a 1050Ti, which is 1.8x to 5.6x faster than state-of-the-art methods, while achieving better depth estimation accuracy. We also conduct obstacle-avoidance experiments in both simulated and real environments to demonstrate the superiority of our method over baseline methods.
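
For intuition only, the sketch below illustrates one way an EDS-style transform that combines depth, model confidence, and motion constraints could be realized. The braking-distance model, the confidence penalty, and all parameter names (max_decel, reaction_time, k_conf) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def to_ego_dynamic_space(depth, confidence, speed,
                         max_decel=4.0, reaction_time=0.1, k_conf=1.0):
    """Illustrative EDS-style transform (not the paper's exact formulation).

    Shrinks each measured depth by the distance the drone needs to stop
    (reaction + braking distance) and by a penalty that grows as the
    network's confidence drops, yielding an "effective" traversable depth.
    """
    # Distance travelled before the drone can stop at its current speed.
    stop_dist = speed * reaction_time + speed ** 2 / (2.0 * max_decel)

    # Low-confidence pixels are treated as closer than measured.
    uncertainty_margin = k_conf * (1.0 - confidence) * depth

    eds_depth = depth - stop_dist - uncertainty_margin
    return np.maximum(eds_depth, 0.0)  # non-traversable cells clamp to 0


# Example: a 2x2 depth map (metres) with per-pixel confidence in [0, 1].
depth = np.array([[8.0, 2.0], [6.0, 0.5]])
conf = np.array([[0.9, 0.4], [0.8, 0.2]])
print(to_ego_dynamic_space(depth, conf, speed=3.0))
```

Under such a transform, waypoint selection can simply prefer regions whose effective depth remains large, since both the drone's stopping distance and the estimator's uncertainty have already been folded into the map.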
