Abstract

Robots operating in everyday environments have to adapt their strategy as environmental conditions change. The goal of this paper is to develop robots that adapt their strategy to such changes. In our method, we apply evolutionary computation to find the optimal relation between reinforcement learning parameters and robot performance. The proposed algorithm is evaluated in a simulated environment of the Cyber Rodent (CR) robot, where the robot has to increase its energy level by capturing active battery packs. The CR robot lives in two environments with different settings that replace each other four times. The results show that evolution can generate an optimal relation between robot performance and the exploration-exploitation balance of reinforcement learning, enabling the robot to adapt its strategy online as the environmental conditions change.

KEYWORDS: reinforcement learning, evolution, strategy adaptation

1. INTRODUCTION

Reinforcement learning (RL) (Sutton & Barto, 1998; Kaelbling et al., 1996) is an efficient learning framework for autonomous robots, in which the robot learns how to behave from interactions with the environment, without explicit environmental models or teacher signals. Most RL applications so far have been constrained to stationary environments. However, in many real-world tasks the environment is not fixed. Therefore, the robot must change its strategy based on the
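As a rough illustration of the idea in the abstract, the sketch below adapts a softmax (Boltzmann) exploration parameter from a performance signal. The mapping `evolved_beta` and its parameters `w0` and `w1` are hypothetical stand-ins for what the evolutionary search would tune; this is not the paper's actual implementation, only a minimal sketch of the mechanism.

```python
import math
import random

def softmax_action(q_values, beta, rng=random):
    """Boltzmann (softmax) action selection: higher beta -> greedier."""
    m = max(beta * q for q in q_values)                # for numerical stability
    prefs = [math.exp(beta * q - m) for q in q_values]
    r = rng.random() * sum(prefs)
    acc = 0.0
    for action, p in enumerate(prefs):
        acc += p
        if r <= acc:
            return action
    return len(q_values) - 1

def evolved_beta(performance, w0=0.5, w1=4.0):
    """Hypothetical evolved mapping from a performance signal (e.g. a
    normalized energy level in [0, 1]) to the softmax inverse temperature.
    Low performance -> small beta (explore more); high performance ->
    large beta (exploit more). w0 and w1 stand in for parameters that a
    genetic algorithm would tune."""
    p = max(0.0, min(1.0, performance))
    return w0 + w1 * p
```

With such a mapping, a drop in energy level after an environment switch lowers beta and re-opens exploration, which is the kind of online strategy change the paper evaluates.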
