Abstract

Path planning is an indispensable component in guiding unmanned ground vehicles (UGVs) from their initial positions to designated destinations, aiming to determine trajectories that are optimal or near-optimal. While conventional path-planning techniques have long served this purpose, planners based on reinforcement learning (RL) adapt far better to highly complex and dynamic environments. Nevertheless, existing RL-based path planners suffer from several shortcomings, notably redundant map representations, inadequate feature extraction, and limited adaptability across diverse environments. To address these challenges, this paper proposes a highly self-adaptive path-planning approach that combines Transformer-encoder feature extraction with incremental reinforcement learning (IRL). First, an autoencoder compresses redundant map representations, supplying the planner with sufficient environmental information at greatly reduced dimensionality. Second, a Transformer encoder, well suited to modeling global long-range dependencies, captures the intricate correlations among UGV statuses at consecutive time steps. Finally, IRL strengthens the planner's generalization, particularly when the trained agent is deployed in environments that differ from those seen during training. Our empirical findings demonstrate that the proposed method outperforms traditional uniform-sampling-based approaches in execution time, path length, and trajectory smoothness, and achieves a fivefold improvement in adaptability over conventional transfer-learning-based fine-tuning methods.
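Although the abstract does not specify the architecture, the feature-extraction pipeline it describes can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example in which all module names, layer sizes, and the 64x64 occupancy-grid input are illustrative assumptions rather than details from the paper: an autoencoder compresses the map into a compact latent code, a Transformer encoder extracts a feature from a window of consecutive UGV status vectors, and the two are concatenated to form the planner's input. The IRL fine-tuning stage is omitted here.

import torch
import torch.nn as nn

class MapAutoencoder(nn.Module):
    """Compresses a 2D occupancy-grid map (assumed 64x64) into a low-dimensional latent code."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16),
            nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, grid):
        z = self.encoder(grid)           # compact map representation for the planner
        return self.decoder(z), z        # reconstruction is used only to train the autoencoder

class StatusEncoder(nn.Module):
    """Transformer encoder over a window of consecutive UGV status vectors,
    capturing long-range dependencies between time steps."""
    def __init__(self, status_dim: int = 6, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, max_len: int = 50):
        super().__init__()
        self.embed = nn.Linear(status_dim, d_model)
        # Learned positional embedding so the encoder can distinguish time steps.
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, statuses):  # statuses: (batch, seq_len, status_dim)
        h = self.embed(statuses) + self.pos[:, : statuses.size(1)]
        return self.encoder(h)[:, -1]  # feature attending over the whole status window

if __name__ == "__main__":
    grid = torch.rand(1, 1, 64, 64)      # occupancy map (illustrative)
    statuses = torch.rand(1, 10, 6)      # e.g. pose/velocity over 10 steps (illustrative)
    _, map_z = MapAutoencoder()(grid)
    status_feat = StatusEncoder()(statuses)
    policy_input = torch.cat([map_z, status_feat], dim=-1)
    print(policy_input.shape)            # torch.Size([1, 96])

Concatenating the small map latent with a single Transformer feature keeps the policy input low-dimensional, which is the stated purpose of the compression step; the exact fusion used in the paper may differ.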
