Abstract

Obstacle avoidance with a monocular camera is a fundamental yet highly challenging task for a quadrotor because a single camera provides no direct 3D information. Methods based on convolutional neural networks (CNNs) [1] for monocular depth estimation and obstacle detection have become increasingly popular owing to the considerable advances in deep learning. However, depth estimation by a pre-trained CNN usually suffers a large accuracy drop on scenes that differ from the training data, a situation that is common when a drone must avoid obstacles in unknown environments. In this paper, we present a reactive obstacle avoidance system that employs an online adaptive CNN to progressively improve depth estimation from a monocular camera in unfamiliar environments. Pairs of motion-stereo images are collected on the fly as training data by a direct monocular SLAM system running in parallel with the CNN. We introduce novel approaches for selecting highly reliable training samples from the noisy data provided by SLAM and for efficient online CNN tuning. The depth map computed by the CNN is transformed into Ego Dynamic Space (EDS) by embedding both the dynamic motion constraints of the quadrotor and the depth estimation errors into the spatial depth map. Traversable waypoints that respect the camera's field-of-view constraint are automatically computed in EDS, from which appropriate control inputs for the quadrotor are produced. Experimental results on public datasets, simulated environments, and unseen cluttered indoor environments demonstrate the effectiveness of our system.
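The EDS transformation described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simple braking-distance model (reaction distance plus kinematic stopping distance) and a scalar per-pixel error margin `sigma`, both hypothetical parameters chosen for illustration:

```python
import numpy as np

def depth_to_eds(depth, v, a_max, t_react, sigma):
    """Transform a metric depth map into an Ego Dynamic Space (EDS) map.

    Hypothetical parameters (not from the paper):
      depth   -- HxW depth map in metres (e.g. CNN output)
      v       -- current forward speed of the quadrotor (m/s)
      a_max   -- maximum braking deceleration (m/s^2)
      t_react -- system reaction time (s)
      sigma   -- depth estimation error margin subtracted per pixel (m)
    """
    # Distance travelled before the vehicle can come to a stop:
    # reaction distance plus kinematic braking distance v^2 / (2 a).
    d_brake = v * t_react + v ** 2 / (2.0 * a_max)
    # Shrink each measured depth by the braking distance and the
    # estimation error; values clamped at zero mean "not traversable".
    return np.maximum(depth - d_brake - sigma, 0.0)
```

Under this model, free space shrinks as speed grows, so waypoint selection in EDS implicitly accounts for the vehicle's dynamics and the depth uncertainty.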

