Abstract

Obstacle avoidance is a core capability of mobile robots, and advances in it substantially improve the stability of robot operation. Most existing obstacle avoidance methods are built on path planning or guidance and perform poorly, with low efficiency, in complicated and unpredictable environments. In this paper, we propose an obstacle avoidance method with a hierarchical controller based on deep reinforcement learning, which achieves more efficient adaptive obstacle avoidance without path planning. The controller comprises multiple neural networks: an action selector and an action runner consisting of two neural-network strategies and two single actions. The action selector and each neural-network strategy are trained separately in a simulation environment before being deployed on a robot. We validated the method on wheeled robots; in more than 200 tests, it achieved a success rate of up to 90%.
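To make the described hierarchy concrete, the sketch below shows one possible structure for such a controller: a selector network that picks among four options, and a runner holding two learned sub-policies plus two fixed single actions. All module sizes, the observation format, and the choice of fixed actions (stop, rotate in place) are assumptions for illustration only; the paper's actual networks and training setup may differ.

```python
# Minimal sketch of a hierarchical obstacle-avoidance controller (assumed design).
import torch
import torch.nn as nn


class SubPolicy(nn.Module):
    """One of the two neural-network strategies in the action runner."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),  # e.g. normalized wheel velocities
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class ActionSelector(nn.Module):
    """Chooses which runner option to execute for the current observation."""
    def __init__(self, obs_dim: int, num_options: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, num_options),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # logits over the runner's options


class HierarchicalController(nn.Module):
    """Action selector on top of an action runner (two learned policies + two fixed actions)."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.selector = ActionSelector(obs_dim, num_options=4)
        self.policies = nn.ModuleList([SubPolicy(obs_dim, act_dim) for _ in range(2)])
        # Two hand-specified single actions; "stop" and "rotate in place" are assumed examples.
        self.fixed_actions = [
            torch.zeros(act_dim),                                # stop
            torch.tensor([0.5, -0.5] + [0.0] * (act_dim - 2)),   # rotate in place
        ]

    def act(self, obs: torch.Tensor) -> torch.Tensor:
        option = self.selector(obs).argmax(dim=-1).item()
        if option < 2:
            return self.policies[option](obs)        # learned strategy
        return self.fixed_actions[option - 2]        # fixed single action


# Usage: one control step with a dummy laser-scan-style observation.
controller = HierarchicalController(obs_dim=24, act_dim=2)
action = controller.act(torch.randn(24))
print(action)
```

In this sketch the selector and each sub-policy are independent modules, which matches the abstract's statement that they are trained separately in simulation before deployment.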
