Non-learning-based motion and path planning for Unmanned Aerial Vehicles (UAVs) suffers from low computational efficiency, high memory consumption for mapping, and a tendency to converge to local optima. This article investigates quadrotor control using offline reinforcement learning. By establishing a data-driven learning paradigm that requires no interaction with the real environment, the proposed workflow offers a safer approach than traditional online reinforcement learning, making it particularly well suited to UAV control in industrial scenarios. The introduced algorithm evaluates the uncertainty of the offline dataset and employs a pessimistic value estimate to support stable offline deep reinforcement learning. Experiments highlight the algorithm's advantage over traditional online reinforcement learning methods when learning from offline datasets. Furthermore, the article emphasizes the importance of a more general behavior policy for data collection. In evaluation, the trained policy demonstrated its versatility by successfully navigating diverse obstacles, underscoring its real-world applicability.
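To make the pessimism mechanism concrete, below is a minimal, hypothetical sketch of one common way such uncertainty-aware pessimistic estimation is realized in offline RL: penalizing the Bellman target by the disagreement of a Q-network ensemble. All names and hyperparameters here (`QEnsemble`, `beta`, network sizes) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: ensemble-based pessimistic Q-target for offline RL.
# This is NOT the paper's code; it shows the generic technique of penalizing
# value estimates by dataset uncertainty, approximated via ensemble disagreement.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """Ensemble of small Q-networks; their disagreement proxies dataset uncertainty."""

    def __init__(self, state_dim: int, action_dim: int, n_members: int = 5):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )
            for _ in range(n_members)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        x = torch.cat([state, action], dim=-1)
        # Shape: (n_members, batch, 1)
        return torch.stack([m(x) for m in self.members], dim=0)


def pessimistic_target(q_ensemble: QEnsemble,
                       next_state: torch.Tensor,
                       next_action: torch.Tensor,
                       reward: torch.Tensor,
                       done: torch.Tensor,
                       gamma: float = 0.99,
                       beta: float = 1.0) -> torch.Tensor:
    """Bellman target penalized by ensemble std: mean(Q) - beta * std(Q).

    `beta` (assumed hyperparameter) controls pessimism: larger values
    penalize state-action pairs poorly covered by the offline dataset.
    `done` is expected as a float tensor in {0.0, 1.0}.
    """
    with torch.no_grad():
        q_values = q_ensemble(next_state, next_action)          # (n, batch, 1)
        q_pessimistic = q_values.mean(dim=0) - beta * q_values.std(dim=0)
        return reward + gamma * (1.0 - done) * q_pessimistic
```

A critic trained to regress onto this target learns lower values where the ensemble disagrees, steering the learned policy away from out-of-distribution actions, which is the core safety argument for offline over online RL made in the abstract.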