Abstract

Recently, reinforcement learning has attracted attention as a learning technique often applied to real robots. One of its problems is that reinforcement learning has difficulty coping with a changing purpose, because it depends on the reward. Previously, we proposed learning information that does not depend on the reward in order to solve this problem: the environmental transition. We defined this information as "Reward-Independent Knowledge (RIK)". A robot acquires RIK and uses it to predict a route from the initial state to the purpose state; in this way, reinforcement learning can cope with a changing purpose. However, it is difficult for RIK to cope with a dynamic environment, because RIK is a one-to-one correspondence between a state-action pair and the next state. We therefore propose that RIK hold multiple next states, together with the probability of each possible next state. In this paper, we perform a simulation experiment using a maze problem in which both the goal and the structure of the maze change, and we show that the proposed knowledge can cope with both a changing purpose and a dynamic environment.
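The idea described above can be sketched as follows: a reward-independent transition table that records, for each state-action pair, the observed next states and their frequencies, and a planner that searches over the most likely transitions to predict a route to a (possibly changed) goal. This is a minimal illustrative sketch under assumed names (`ProbabilisticRIK`, `observe`, `plan`), not the authors' implementation.

```python
from collections import defaultdict, Counter, deque

class ProbabilisticRIK:
    """Hypothetical sketch of probabilistic Reward-Independent Knowledge:
    for each (state, action) pair, count the observed next states so that
    transition probabilities can be estimated from experience."""

    def __init__(self):
        # (state, action) -> Counter mapping next_state -> observation count
        self.counts = defaultdict(Counter)

    def observe(self, state, action, next_state):
        """Record one observed environmental transition (no reward needed)."""
        self.counts[(state, action)][next_state] += 1

    def probability(self, state, action, next_state):
        """Estimated probability of reaching next_state via (state, action)."""
        total = sum(self.counts[(state, action)].values())
        return self.counts[(state, action)][next_state] / total if total else 0.0

    def plan(self, start, goal):
        """Breadth-first search over the most likely next state of each
        action; returns a list of actions from start to goal, or None."""
        frontier, visited = deque([(start, [])]), {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for (s, a), nexts in self.counts.items():
                if s != state:
                    continue
                likely = nexts.most_common(1)[0][0]  # most probable outcome
                if likely not in visited:
                    visited.add(likely)
                    frontier.append((likely, actions + [a]))
        return None
```

Because the table stores counts rather than a single next state, a transition that becomes unreliable after the maze structure changes is gradually outvoted by new observations, and a new goal only requires re-running `plan` with a different target.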
