Abstract

Traditional Reinforcement Learning (RL) approaches are designed to perform well in static environments. In many real-world scenarios, however, environments are complex and dynamic, and the performance of traditional RL approaches may degrade drastically. One factor that makes an environment dynamic and complex is change in the position and number of obstacles. This paper presents a path planning approach for autonomous mobile robots in a complex dynamic indoor environment, in which the dynamic pattern of obstacles does not drastically affect the performance of the RL models. Two independent modules, collision avoidance without considering the goal position and goal seeking without considering obstacle avoidance, are trained independently using artificial neural networks and RL to obtain their best control policies. A switching function then combines the two trained modules to realize obstacle avoidance and global path planning in a complex dynamic indoor environment. Furthermore, the control system is designed with a special focus on the computational and memory requirements of resource-constrained robots, and the design was tested in a real-world environment on a mini-robot with constrained resources. In addition to avoiding static and dynamic obstacles, the system can reach both static and moving targets. The control system can also be used to train a robot in the real world with RL when the robot cannot afford to collide. The robot's behavior in real-world trials shows a very strong correlation with the simulation results.
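
To make the two-module architecture concrete, the sketch below illustrates one plausible form of the switching function described above. The abstract does not specify the switching criterion or the policies' interfaces, so everything here is an assumption: the policy functions are trivial stand-ins for the trained neural networks, and a simple minimum-distance threshold (`SAFETY_RADIUS`) is used to decide which module controls the robot at each step.

```python
import numpy as np

# Hypothetical stand-ins for the two independently trained RL policies
# described in the abstract; in the paper these are neural networks.
def collision_avoidance_policy(scan: np.ndarray) -> np.ndarray:
    """Maps range-sensor readings to a (linear, angular) velocity command
    that steers away from nearby obstacles (goal position ignored)."""
    # Turn away from whichever side has the closer reading (illustrative only).
    left = scan[: len(scan) // 2].min()
    right = scan[len(scan) // 2 :].min()
    return np.array([0.1, 0.5 if left < right else -0.5])

def goal_seeking_policy(goal_bearing: float) -> np.ndarray:
    """Maps the bearing to the goal to a velocity command that heads
    toward it (obstacles ignored)."""
    return np.array([0.2, float(np.clip(goal_bearing, -0.5, 0.5))])

SAFETY_RADIUS = 0.5  # metres; an assumed switching threshold

def switched_action(scan: np.ndarray, goal_bearing: float) -> np.ndarray:
    """Switching function: defer to the collision-avoidance module when
    any obstacle is inside the safety radius; otherwise seek the goal."""
    if scan.min() < SAFETY_RADIUS:
        return collision_avoidance_policy(scan)
    return goal_seeking_policy(goal_bearing)

# Example step: an obstacle close on the left, goal straight ahead.
# The collision-avoidance module takes over and turns the robot right.
print(switched_action(np.array([0.3, 0.8, 1.5, 2.0]), goal_bearing=0.0))
```

A hard switch of this kind keeps the on-board computation to a single policy evaluation per step, which is consistent with the paper's emphasis on resource-constrained robots; the actual criterion used by the authors may differ.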
