Abstract

Unmanned Aerial Vehicles (UAVs) can navigate with low risk in obstacle-free environments using ground control stations that plan a series of GPS waypoints as a path to follow. This GPS waypoint navigation, however, becomes dangerous in environments where the GPS signal is faulty or available only in some places and where the airspace is filled with obstacles. UAV navigation then becomes challenging because the UAV must rely on other sensors, which in turn introduce uncertainty into its localisation and motion systems, especially if the UAV is a low-cost platform. Additional uncertainty affects the mission when the UAV's goal location is only partially known and can only be discovered by exploring and detecting a target. This navigation problem is formulated in this research as a Partially-Observable Markov Decision Process (POMDP), so as to produce a policy that maps belief states and observations to motion commands. The policy is calculated and updated on-line while flying with a newly developed system for UAV Uncertainty-Based Navigation (UBNAV), which navigates cluttered and GPS-denied environments using observations and executing motion commands instead of waypoints. Experimental results in both simulation and real flight tests show that the UAV finds a path on-line to a region where it can explore and detect a target without colliding with obstacles. UBNAV provides a new method and an enabling technology for scientists to implement and flight-test UAV navigation missions with uncertainty, in which targets must be detected using on-line POMDP algorithms in real flight scenarios.

Highlights

  • Ground, underwater and aerial robots are widely used for environmental monitoring and target detection missions [1,2,3,4,5]

  • We developed Uncertainty-Based Navigation (UBNAV) for Unmanned Aerial Vehicles (UAVs) to navigate in Global Positioning System (GPS)-denied environments by executing a policy that takes into account different sources of uncertainty

  • We implemented and tested two of the fastest on-line Partially-Observable Markov Decision Process (POMDP) algorithms, Partially-Observable Monte Carlo Planning (POMCP) [14] and Adaptive Belief Tree (ABT) [16], in hardware and software in order to test the system for UAV navigation missions

Introduction

Underwater and aerial robots are widely used for environmental monitoring and target detection missions [1,2,3,4,5]. Among these robots, UAVs use ground control stations to plan a path to a goal before flying, using Global Positioning System (GPS) sensors as their source of localisation [6,7,8,9,10]. We developed UAV Uncertainty-Based Navigation (UBNAV) for UAVs to navigate in GPS-denied environments by executing a policy that takes into account different sources of uncertainty. This policy is calculated on-line by a POMDP path-planning algorithm. The system uses motion commands instead of waypoints and updates the policy after receiving feedback from a perception module. This approach simplifies modelling of the decoupled system dynamics by using time-step responses for a set of holonomic actions in four states of the UAV. UBNAV enables researchers to implement and flight-test on-line POMDP algorithms for UAV navigation in GPS-denied environments or where perception has high degrees of uncertainty.
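The plan-act-observe loop described above can be sketched in a few lines. The toy below is purely illustrative and is not UBNAV's implementation: it uses a hypothetical 1-D corridor world with an unknown target cell, a particle-filter belief over the target's location, and a simple rollout-based action selection that only gestures at what POMCP and ABT do far more efficiently with search trees.

```python
import random

# Toy 1-D corridor POMDP (illustrative only, not UBNAV's model).
# State = (uav_cell, target_cell); the target cell is hidden.
N_CELLS = 10
ACTIONS = [-1, +1]  # motion commands: move left / move right

def transition(state, action):
    uav, target = state
    uav = max(0, min(N_CELLS - 1, uav + action))  # clip at corridor ends
    return (uav, target)

def observe(state):
    uav, target = state
    # Noisy target detector: reports correctly 90% of the time.
    detected = uav == target
    return detected if random.random() < 0.9 else not detected

def reward(state):
    uav, target = state
    return 10.0 if uav == target else -1.0

def update_belief(particles, action, obs):
    # Particle-filter belief update: propagate each particle through the
    # motion model, weight by observation likelihood, then resample.
    moved = [transition(p, action) for p in particles]
    weights = [0.9 if ((p[0] == p[1]) == obs) else 0.1 for p in moved]
    return random.choices(moved, weights=weights, k=len(particles))

def plan(particles, depth=5, n_rollouts=200):
    # On-line planning sketch: estimate each action's value by Monte Carlo
    # rollouts from belief samples, and pick the best.
    best_action, best_value = None, float("-inf")
    for a in ACTIONS:
        value = 0.0
        for _ in range(n_rollouts):
            s = transition(random.choice(particles), a)
            ret = reward(s)
            for _ in range(depth - 1):
                s = transition(s, random.choice(ACTIONS))
                ret += reward(s)
            value += ret / n_rollouts
        if value > best_value:
            best_action, best_value = a, value
    return best_action

random.seed(0)
true_state = (0, 7)  # UAV starts at cell 0; target (unknown to it) at cell 7
particles = [(0, random.randrange(N_CELLS)) for _ in range(500)]

for step in range(20):  # plan-act-observe loop
    a = plan(particles)                      # plan on-line over the belief
    true_state = transition(true_state, a)   # execute the motion command
    obs = observe(true_state)                # perception module feedback
    particles = update_belief(particles, a, obs)
    if true_state[0] == true_state[1]:
        break

print("UAV cell:", true_state[0], "target cell:", true_state[1])
```

The key point mirrored from the paper is the loop structure: the planner never commits to a fixed waypoint path; it replans after every observation, so the belief (here, the particle set) absorbs sensor and motion uncertainty as the flight proceeds.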

Markov Decision Processes and Partially-Observable Markov Decision Processes
POMCP and ABT
System Architecture
On-Line POMDP Module
Motion Control Module
Observation Module
Problem Description and Formulation
Simulation
Real Flight Tests
Conclusions