Abstract
Our research goal is to design an agent that can begin with low-level sensors and effectors and autonomously learn high-level representations and actions through interaction with the environment. This chapter focuses on the problem of learning representations. We present four principles for autonomous learning of representations in a developing agent, and we demonstrate how these principles can be embodied in an algorithm. In a simulated environment with realistic physics, we show that an agent can use these principles to autonomously learn useful representations and effective hierarchical actions.