Abstract

Path planning for mobile robots in stochastic, dynamic environments is a difficult problem and the subject of much research in robotics. While many approaches place the computational burden of path planning on the robot itself, physical path planning methods shift this burden to a set of sensor nodes distributed throughout the environment that communicate path-cost information to one another. Previous approaches to physical path planning have examined the performance of such networks in regular environments (e.g., office buildings) using highly structured, uniform network deployments (e.g., grids). Moreover, these networks make no use of the real experience of the robots they guide. We extend previous work in this area by incorporating reinforcement learning techniques into these methods and show improved performance in simulated rough-terrain environments. We also show that these networks, which we term SWIRLs (Swarms of Interacting Reinforcement Learners), perform well with deployment distributions that are less highly structured than in previous approaches.
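To make the mechanism concrete, here is a minimal sketch (our illustration, not the paper's algorithm; all class and method names are hypothetical) of how such a network could operate in Python: each sensor node keeps an estimated cost-to-goal, blends costs reported by robots that actually traverse its links into its link estimates, and updates its value from neighbor messages, so a passing robot simply follows the greedy next hop.

    import random

    # Hypothetical sketch of physical path planning with learning: each node
    # holds a cost-to-goal estimate, refines its link costs from robot-reported
    # traversals, and updates its value from neighboring nodes' values.

    class SensorNode:
        def __init__(self, node_id, is_goal=False):
            self.node_id = node_id
            self.is_goal = is_goal
            # Goal nodes cost zero; others start pessimistic (assumed init).
            self.value = 0.0 if is_goal else 100.0
            self.neighbors = {}  # neighbor node -> estimated traversal cost

        def add_neighbor(self, other, initial_cost):
            self.neighbors[other] = initial_cost

        def observe_traversal(self, neighbor, actual_cost, lr=0.2):
            # Blend a robot-reported cost into the link estimate (TD-style step).
            old = self.neighbors[neighbor]
            self.neighbors[neighbor] = (1 - lr) * old + lr * actual_cost

        def update_value(self):
            # One asynchronous value update using neighbor messages.
            if self.is_goal or not self.neighbors:
                return
            self.value = min(cost + nbr.value
                             for nbr, cost in self.neighbors.items())

        def best_next_hop(self):
            # Direction a passing robot should take: greedy over neighbors.
            return min(self.neighbors,
                       key=lambda n: self.neighbors[n] + n.value)

    # Tiny line network n0 -- n1 -- goal; the n1->goal link has a noisy true
    # cost near 2.0 that n1 learns from simulated robot traversals.
    goal = SensorNode("goal", is_goal=True)
    n1, n0 = SensorNode("n1"), SensorNode("n0")
    n1.add_neighbor(goal, initial_cost=5.0)
    n0.add_neighbor(n1, initial_cost=5.0)

    for _ in range(50):
        n1.observe_traversal(goal, actual_cost=random.gauss(2.0, 0.3))
        n1.update_value()
        n0.update_value()

    print(n0.best_next_hop().node_id, round(n0.value, 1))  # -> n1, roughly 7.0

In this toy version only link costs are learned from experience; the SWIRL networks in the paper presumably use richer reinforcement-learning updates, but the basic mechanism the abstract describes is the same: path-cost information flowing between neighboring nodes, improved by the real experience of the robots being guided.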
