Abstract

With the continuous progress of robotics, mobile robots have been widely adopted in many fields. In the power grid, mobile robots are used to inspect electrical equipment, greatly reducing the manpower and material resources required. However, mobile robots often have to work in complex, constantly changing environments; because they cannot obtain environmental information in time, path planning becomes difficult. To address this problem, this paper proposes a path planning method for mobile robots based on improved reinforcement learning. The method builds a grid environment model and defines the reward in terms of the number of steps the robot takes. It then introduces a varying action-selection strategy to balance the robot's exploration and exploitation of the environment during reinforcement learning, so that the exploration factor changes dynamically as the robot's knowledge of the environment grows, which speeds up the convergence of the learning algorithm. Simulation results show that the method achieves autonomous navigation and path planning for mobile robots in complex environments and, compared with traditional algorithms, greatly reduces the number of iterations.
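
To make the described idea concrete, the following is a minimal sketch (not the authors' code) of Q-learning on a grid environment with a step-based reward and an exploration factor (epsilon) that decays as the robot visits more of the environment. The grid layout, reward values, hyperparameters, and the decay schedule are illustrative assumptions, since the abstract does not give the paper's exact settings.

```python
import numpy as np

# Illustrative grid world: 0 = free cell, 1 = obstacle (layout is assumed).
GRID = np.array([
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

ALPHA, GAMMA = 0.1, 0.9        # learning rate and discount (assumed values)
EPS_MAX, EPS_MIN = 1.0, 0.05   # exploration factor bounds (assumed values)

Q = np.zeros(GRID.shape + (len(ACTIONS),))
visited = np.zeros(GRID.shape, dtype=bool)

def step(state, a):
    """Apply action a; return the next state and a step-based reward."""
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID.shape[0] and 0 <= c < GRID.shape[1]) or GRID[r, c]:
        return state, -5.0     # hitting a wall or obstacle is penalized
    if (r, c) == GOAL:
        return (r, c), 100.0   # reaching the goal is rewarded
    return (r, c), -1.0        # each ordinary step costs -1, so shorter paths score higher

for episode in range(500):
    state = START
    for _ in range(10_000):    # cap episode length
        visited[state] = True
        # The exploration factor shrinks as the fraction of visited free cells grows,
        # shifting the robot from exploration toward exploitation.
        explored = visited.sum() / (GRID == 0).sum()
        eps = max(EPS_MIN, EPS_MAX * (1.0 - explored))
        if np.random.rand() < eps:
            a = np.random.randint(len(ACTIONS))   # explore: random action
        else:
            a = int(np.argmax(Q[state]))          # exploit: best known action
        nxt, reward = step(state, a)
        Q[state][a] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state][a])
        state = nxt
        if state == GOAL:
            break
```

After training, following the greedy action argmax(Q[state]) from the start cell traces the learned path; the dynamic epsilon schedule is what the abstract credits with reducing the number of iterations compared with a fixed exploration factor.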
