Abstract

Modern model-based reinforcement learning methods for high-dimensional inputs often incorporate an unsupervised learning step for dimensionality reduction. The training objective of these unsupervised methods typically leverages only static inputs, e.g., the reconstruction of individual observations. The resulting representations are then combined with predictor functions that simulate rollouts for navigating the environment. We advance this idea by exploiting the fact that we navigate dynamic environments through visual stimuli, and we create a representation designed specifically with control and actions in mind. We propose to learn a feature map that is maximally predictable for a predictor function, which yields representations well suited to planning, where the predictor serves as a forward model. To this end, we introduce a new way of learning this representation jointly with the prediction function, a system we dub Latent Representation Prediction Network (LARP). The prediction function is used as a forward model for search on a graph in a viewpoint-matching task, and the representation learned to maximize predictability is found to outperform alternative representations. The sample efficiency and overall performance of our approach rival those of standard reinforcement learning methods, and our learned representation transfers successfully to unseen environments.
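The core objective described above, training an encoder and a predictor jointly so that the latent representation is maximally predictable, can be sketched as a one-step latent prediction loss. The following minimal PyTorch sketch is illustrative only: the module names (Encoder, Predictor, predictability_loss), layer sizes, and the plain L2 objective are our assumptions, not the paper's exact architecture or loss.

```python
# Minimal sketch of joint representation/predictor learning.
# Assumed setup: flat continuous observations and vector-valued actions;
# all names and dimensions below are hypothetical.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a raw observation to a low-dimensional latent state z."""

    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class Predictor(nn.Module):
    """Forward model: predicts the next latent state from (z, action)."""

    def __init__(self, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, action], dim=-1))


def predictability_loss(encoder, predictor, obs, action, next_obs):
    """One-step latent prediction error, minimized w.r.t. both networks,
    so the representation itself is shaped to be easy to predict."""
    z = encoder(obs)
    z_next = encoder(next_obs)
    z_pred = predictor(z, action)
    return ((z_pred - z_next) ** 2).mean()
```

Note that minimizing this loss alone admits a degenerate solution (a constant representation is trivially predictable), so in practice some additional term or constraint is needed; the abstract does not specify the paper's mechanism. At planning time, the trained predictor can be applied repeatedly to candidate actions, turning reachable latent states into nodes of a search graph, as the abstract describes for the viewpoint-matching task.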
