Abstract

This paper presents a deep reinforcement learning (Deep-RL) system for goal-oriented mapless navigation of Unmanned Aerial Vehicles (UAVs). Image-based sensing approaches are the most common in this context, but they demand high-performance processing hardware that is heavy and difficult to embed in a small autonomous UAV. Our approach instead trains the intelligent agent on localization data and simple sparse range data. We build on two state-of-the-art Deep-RL techniques for terrestrial robots: Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC), and we compare their performance with a classic geometric-based tracking controller for mapless navigation of UAVs. Based on experimental results, we conclude that Deep-RL algorithms are effective for mapless navigation and obstacle avoidance with UAVs. Our vehicle successfully performed the two proposed tasks, reaching the desired goal and outperforming the geometric-based tracking controller in obstacle avoidance.
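The abstract states that the agent is trained on localization data and sparse range readings rather than images. As a minimal sketch of what such an observation vector might look like, the snippet below assembles sparse range data together with the distance and heading to the goal; the exact layout, names, and the inclusion of the previous action are assumptions, since the abstract does not specify the state representation.

```python
import math

def build_state(ranges, uav_pos, goal_pos, prev_action):
    """Assemble a hypothetical observation vector for the Deep-RL agent.

    Assumed layout (not specified in the abstract):
    sparse range readings, distance and relative heading to the goal,
    and the previous action taken by the agent.
    """
    dx = goal_pos[0] - uav_pos[0]
    dy = goal_pos[1] - uav_pos[1]
    dist_to_goal = math.hypot(dx, dy)          # Euclidean distance to goal
    angle_to_goal = math.atan2(dy, dx)         # bearing to goal in radians
    return list(ranges) + [dist_to_goal, angle_to_goal] + list(prev_action)

# Example: 10 sparse range readings, UAV at origin, goal at (3, 4)
state = build_state([1.5] * 10, (0.0, 0.0), (3.0, 4.0), (0.0, 0.0))
```

A compact state like this (on the order of a dozen values) is what lets such agents run without the heavy onboard hardware that image-based pipelines require.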
