Abstract

We study the problem of reinforcement learning (RL) using as few real-world samples as possible. A naive application of RL can be inefficient in large, continuous state spaces. We present two versions of multifidelity RL (MFRL), model-based and model-free, that leverage Gaussian processes (GPs) to learn the optimal policy in a real-world environment. In the MFRL framework, an agent uses multiple simulators of the real environment to perform actions. As fidelity increases along the simulator chain, the number of samples required in each successively higher-fidelity simulator decreases. By incorporating GPs into the MFRL framework, we empirically observe up to a 40% reduction in the number of samples for model-based RL and a 60% reduction for the model-free version. We examine the performance of our algorithms through simulations and real-world experiments on navigation with a ground robot.
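To make the simulator-chain idea concrete, the following is a minimal sketch, not the paper's implementation, of one common way GPs are used across fidelities: fit a GP dynamics model to abundant low-fidelity samples, then fit a second GP to the residual between a handful of high-fidelity samples and the low-fidelity prediction, so far fewer expensive samples are needed. The 1-D simulator functions, scikit-learn kernels, and sample counts below are illustrative assumptions.

```python
# Hedged sketch of multifidelity model learning with GPs (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def low_fidelity_sim(x):
    """Cheap, approximate dynamics (hypothetical 1-D example)."""
    return np.sin(x)


def high_fidelity_sim(x):
    """More accurate dynamics: same trend plus effects the cheap model misses."""
    return np.sin(x) + 0.3 * np.cos(3 * x)


kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)

# Fidelity 1: plenty of cheap samples from the low-fidelity simulator.
x_lo = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
gp_lo = GaussianProcessRegressor(kernel=kernel).fit(
    x_lo, low_fidelity_sim(x_lo).ravel()
)

# Fidelity 2: only a handful of expensive samples; model the residual
# between the high-fidelity data and the low-fidelity GP's prediction.
x_hi = np.linspace(0, 2 * np.pi, 8).reshape(-1, 1)
residual = high_fidelity_sim(x_hi).ravel() - gp_lo.predict(x_hi)
gp_residual = GaussianProcessRegressor(kernel=kernel).fit(x_hi, residual)


def predict_high_fidelity(x):
    """Combined prediction: low-fidelity GP plus learned correction."""
    return gp_lo.predict(x) + gp_residual.predict(x)


x_test = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
error = np.abs(predict_high_fidelity(x_test) - high_fidelity_sim(x_test).ravel())
print(f"mean abs error with 8 high-fidelity samples: {error.mean():.3f}")
```

The residual-correction structure is the reason fewer high-fidelity samples suffice: the low-fidelity GP already captures the broad shape of the dynamics, so the second GP only has to learn the (smoother, smaller) discrepancy.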
