Abstract

When deploying mobile robots in real-world scenarios such as airports, train stations, hospitals, and schools, collisions with pedestrians are intolerable and catastrophic, so motion safety becomes one of the most fundamental requirements for mobile robots. However, efficient and safe robot navigation in such dynamic environments remains an open problem. The critical reason is that the inconsistency between navigation efficiency and motion safety is greatly intensified by the high dynamics and uncertainty of pedestrians. To address this challenge, this paper proposes a safe deep reinforcement learning algorithm named Conflict-Averse Safe Reinforcement Learning (CASRL) for autonomous robot navigation in dynamic environments. Specifically, it first separates the collision-avoidance sub-task from the overall navigation task and maintains a safety critic to evaluate the safety/risk of actions. Next, it constructs two task-specific but model-agnostic policy gradients for the goal-reaching and collision-avoidance sub-tasks to eliminate their mutual interference. It then performs a conflict-averse gradient manipulation to resolve the inconsistency between the two sub-tasks. Finally, extensive experiments are performed to demonstrate the advantages of CASRL. Simulation results show an average 8.2% performance improvement over the vanilla baseline across eight groups of dynamic environments, rising to 13.4% in the most challenging group. In addition, forty real-world experiments demonstrate that CASRL can be successfully deployed on a real robot.
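
The abstract does not specify the exact gradient-manipulation rule, so the sketch below only illustrates the general idea of combining two sub-task policy gradients in a conflict-averse way. It uses a PCGrad-style projection (drop the component of each gradient that opposes the other) as an assumed stand-in; the function name `combine_gradients` and the projection rule are illustrative, not the paper's formulation.

```python
import numpy as np

def combine_gradients(g_goal: np.ndarray, g_safe: np.ndarray) -> np.ndarray:
    """Combine goal-reaching and collision-avoidance policy gradients.

    Illustrative only: if the two gradients conflict (negative inner
    product), each is projected onto the normal plane of the other before
    summing, so the final update does not oppose either sub-task.
    """
    g1, g2 = g_goal.copy(), g_safe.copy()
    dot = np.dot(g_goal, g_safe)
    if dot < 0.0:
        # Remove the conflicting component of each gradient.
        g1 -= dot / (np.dot(g_safe, g_safe) + 1e-12) * g_safe
        g2 -= dot / (np.dot(g_goal, g_goal) + 1e-12) * g_goal
    return g1 + g2

# Toy usage with two conflicting 2-D gradients.
g_goal = np.array([1.0, 0.0])
g_safe = np.array([-0.5, 1.0])
update = combine_gradients(g_goal, g_safe)
print(update)  # [0.8, 1.4]: positive inner product with both sub-task gradients
```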
