Abstract

This paper deals with a state-dependent learning method for a mobile robot in dynamic, unknown environments. The robot's aim is to find the optimal path in a maze-navigation task on a grid world. Various reinforcement learning methods have been proposed, but designing the granularity (resolution) of states in the search space is very difficult. We therefore propose a multi-scale value function to enhance the initial stage of reinforcement learning. First, we compare the performance of temporal difference (TD) learning and Q-learning in a dynamic environment, where we assume that several obstacles in the grid world disappear according to an existence probability. Several experimental results show the effectiveness of the proposed method.
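To make the baseline setting concrete, the sketch below runs tabular Q-learning on a small grid world in which each candidate obstacle is present during a given episode with an existence probability. This is a minimal illustration of the dynamic environment described above, not the paper's implementation: the grid layout, reward values, and parameters (P_EXIST, ALPHA, GAMMA, EPSILON) are illustrative assumptions, and the proposed multi-scale value function is not shown.

```python
# Minimal sketch (assumed setup, not the paper's code): tabular Q-learning
# on a grid world where each obstacle exists per episode with probability
# P_EXIST, mimicking the dynamic environment described in the abstract.

import random

GRID_W, GRID_H = 8, 8
START, GOAL = (0, 0), (7, 7)
OBSTACLES = [(3, 3), (3, 4), (4, 3), (5, 6)]      # candidate obstacle cells
P_EXIST = 0.7                                     # existence probability (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1            # assumed learning parameters
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]      # right, left, down, up

Q = {}  # Q[(state, action_index)] -> value, default 0.0


def q(s, a):
    return Q.get((s, a), 0.0)


def step(s, a, active_obstacles):
    """Deterministic move; bumping a wall or obstacle keeps the robot in place."""
    nx, ny = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
    if not (0 <= nx < GRID_W and 0 <= ny < GRID_H) or (nx, ny) in active_obstacles:
        nx, ny = s  # blocked: stay put
    reward = 1.0 if (nx, ny) == GOAL else -0.01  # goal reward plus small step cost
    return (nx, ny), reward, (nx, ny) == GOAL


def run_episode():
    # Re-sample which obstacles exist this episode (the dynamic environment).
    active = {c for c in OBSTACLES if random.random() < P_EXIST}
    s = START
    for _ in range(500):  # step limit per episode
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q(s, i))
        s2, r, done = step(s, a, active)
        # Standard Q-learning (off-policy TD) update.
        best_next = max(q(s2, i) for i in range(len(ACTIONS)))
        Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * best_next - q(s, a))
        s = s2
        if done:
            break


for episode in range(2000):
    run_episode()
```

Because the active obstacle set is re-sampled every episode, the learned Q-values must average over obstacle configurations, which is exactly what makes the initial learning phase slow and motivates the multi-scale value function proposed in the paper.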
