Abstract

In dense obstacle environments, the available flying space is narrow, which makes it difficult to generate a feasible path for a UAV within a limited runtime. In this paper, a Q-learning-based planning algorithm is presented to improve the efficiency of single-UAV path planning in dense obstacle environments. By constructing an offline learning architecture over the state-action space, the proposed method achieves rapid UAV path planning and avoids the high time cost of online path planning with reinforcement learning. To address the cost of re-training the Q-table when the environment changes, a probabilistic local update mechanism is proposed: only the Q-values of selected states are updated, which reduces re-training time and enables rapid updates of the Q-table. The probability that a state's Q-value is updated depends on its distance to the new obstacle: the closer the state is to the new obstacle, the higher its probability of being re-trained. As a result, the flight trajectory can be quickly re-planned when the environment changes. Simulation results show that the proposed Q-learning-based planning algorithm can generate a path for a UAV from a random start position while avoiding obstacles. Compared with the classical A* algorithm, path planning time based on the trained Q-table is reduced from seconds to milliseconds, which significantly improves the efficiency of path planning.
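The abstract does not give the exact form of the update probability, so the following is only a minimal sketch of the probabilistic local update mechanism under stated assumptions: a grid-world state space, a deterministic four-action transition model, and an exponential decay of the re-training probability with Euclidean distance to the new obstacle. All names and parameters (GRID, sigma, the reward values) are illustrative, not the paper's exact design.

```python
import numpy as np

GRID = 50                    # assumed grid size; states are grid cells
ACTIONS = 4                  # assumed action set: up, down, left, right
rng = np.random.default_rng(0)

# Previously trained Q-table (stub standing in for the offline-learned table).
Q = np.zeros((GRID, GRID, ACTIONS))

def step(state, action):
    """Deterministic grid transition, clipped at the boundary (assumed model)."""
    dx, dy = [(0, 1), (0, -1), (-1, 0), (1, 0)][action]
    x = min(max(state[0] + dx, 0), GRID - 1)
    y = min(max(state[1] + dy, 0), GRID - 1)
    return x, y

def retrain_probability(state, new_obstacle, sigma=5.0):
    """Probability of re-training a state's Q-values: higher when the state
    is closer to the newly appeared obstacle (exponential decay is an
    assumption; the paper only states that probability grows with proximity)."""
    d = np.linalg.norm(np.subtract(state, new_obstacle))
    return np.exp(-d / sigma)

def local_update(Q, new_obstacle, alpha=0.1, gamma=0.9):
    """Re-train only states selected by the distance-based probability,
    leaving far-away Q-values untouched so the update stays fast."""
    for x in range(GRID):
        for y in range(GRID):
            if rng.random() < retrain_probability((x, y), new_obstacle):
                for a in range(ACTIONS):
                    nx, ny = step((x, y), a)
                    # Illustrative rewards: large penalty for hitting the new
                    # obstacle, small step cost otherwise.
                    r = -100.0 if (nx, ny) == new_obstacle else -1.0
                    Q[x, y, a] += alpha * (r + gamma * Q[nx, ny].max() - Q[x, y, a])
    return Q

Q = local_update(Q, new_obstacle=(20, 20))
```

Once the Q-table is updated, a path can be read off in milliseconds by greedily following argmax actions from any start cell, which is consistent with the abstract's claim that planning on the trained table is far faster than running A* per query.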
