Abstract

Reinforcement learning is an active research area in artificial intelligence and machine learning, with applications in control. Its most important feature is the ability to learn without prior knowledge about the system. In the real world, however, reinforcement learning actions taken in the absence of any prior knowledge may cause serious damage to the controlled robot or its surroundings. Safety — an often neglected factor in the reinforcement learning community — therefore requires greater attention from researchers. Prior knowledge can increase safety during learning, but at the same time it can severely restrict the set of possible solutions and hamper learning performance. This thesis discusses the influence of different forms of prior knowledge on learning performance and on the risk of robot damage, where prior knowledge ranges from physics-based assumptions, such as the robot's construction and material properties, to knowledge of the task curriculum, or to an approximate model, possibly coupled with a nominal controller.
