Abstract

This paper explores the feasibility of a framework for vision-based obstacle avoidance in unmanned aerial vehicles, in which the decision-making policy is trained under the supervision of actual human flight data. The neural networks are trained on aggregated flight data from human experts, learning an implicit policy for visual obstacle avoidance by extracting the necessary features from the images. The images and flight data are collected in a simulated environment provided by Gazebo, and the Robot Operating System (ROS) supplies the communication nodes for the framework. The framework is tested and validated in various environments using four types of neural networks: fully connected neural networks, two- and three-dimensional convolutional neural networks (CNNs), and recurrent neural networks (RNNs). Among these, the sequential networks (i.e., 3D-CNNs and RNNs) achieve better performance due to their ability to explicitly account for the dynamic nature of the obstacle avoidance problem.
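As an illustrative sketch only, not the paper's actual implementation, the following PyTorch-style snippet shows how a 3D-CNN policy of the kind compared in the abstract might map a short sequence of camera frames to a control command and be trained by behavior cloning on human flight data. The layer sizes, input resolution, number of stacked frames, and single-output command are assumptions introduced for illustration.

import torch
import torch.nn as nn

class Conv3DPolicy(nn.Module):
    """Hypothetical sketch: maps a short clip of camera frames to a control command.
    Input shape (batch, channels=3, frames=4, height=64, width=64) and the single
    scalar output are assumptions, not the paper's configuration."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((1, 4, 4)),  # collapse the temporal axis
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, 1)  # e.g., a yaw-rate command

    def forward(self, clip):
        return self.head(self.features(clip))

# Behavior-cloning step on aggregated expert data (tensors here are placeholders):
policy = Conv3DPolicy()
clips = torch.randn(8, 3, 4, 64, 64)   # batch of 4-frame image sequences
commands = torch.randn(8, 1)           # corresponding human pilot commands
loss = nn.functional.mse_loss(policy(clips), commands)
loss.backward()

The point of contrast with a 2D-CNN baseline is that the 3D convolutions operate over the frame dimension as well, so the learned features can capture motion cues rather than a single static view.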
