Abstract

Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman–Kac path integral and can be estimated using Monte Carlo sampling. In this contribution we review path integral control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. Within PI theory, the question of how to compute becomes a question of importance sampling. Efficient importance samplers are state feedback controllers, and their use requires an efficient representation. Learning and representing effective state feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned with an arbitrary parametrisation. We refer to this method as the Path Integral Cross Entropy method, or PICE, and illustrate it on some simple examples. Path integral control methods can also be used to estimate the posterior distribution in latent state models. In neuroscience such problems arise when estimating connectivity from neural recording data using EM. We demonstrate the path integral control method as an accurate alternative to particle filtering.
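A rough illustration of the Monte Carlo estimate mentioned in the abstract: the sketch below samples uncontrolled rollouts of a hypothetical one-dimensional diffusion and forms the importance-weighted estimate u*(x0, 0) ≈ ⟨w dW0⟩ / (⟨w⟩ dt), with weights w = exp(−S/λ) given by the path cost S. The drift f, the costs V and φ, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Naive Monte Carlo estimate of the path integral optimal control at (x0, 0):
#   u*(x0, 0) * dt  ~=  E[exp(-S/lambda) * dW0] / E[exp(-S/lambda)]
# All model choices below are illustrative assumptions.
rng = np.random.default_rng(0)
dt, T, N, lam = 0.01, 1.0, 10_000, 1.0
steps = int(T / dt)
f = lambda x: -x                       # assumed drift
V = lambda x: 0.5 * x**2               # assumed state cost
phi = lambda x: 10.0 * x**2            # assumed end cost

x = np.full(N, 1.0)                    # N rollouts starting from x0 = 1
S = np.zeros(N)                        # accumulated path cost per rollout
dW0 = np.zeros(N)                      # first noise increment of each rollout
for k in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=N)
    if k == 0:
        dW0 = dW
    S += V(x) * dt
    x += f(x) * dt + dW                # uncontrolled dynamics
S += phi(x)

w = np.exp(-(S - S.min()) / lam)       # importance weights, shifted for stability
u_hat = (w @ dW0) / (w.sum() * dt)     # estimated u*(x0, 0)
print("estimated optimal control at (x0, 0):", u_hat)
```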

Highlights

  • Stochastic optimal control theory (SOC) considers the problem of computing an optimal sequence of actions to attain a future goal

  • In [13] it was observed that posterior inference in a certain class of diffusion processes can be mapped onto a stochastic optimal control problem

  • The controller parameters are updated by gradient descent, θ_{n+1} = θ_n − η ∂KL/∂θ_n, with η > 0 a small parameter. This gradient descent procedure converges to a local minimum of the KL divergence (Eq. 23) by standard arguments. We refer to this gradient method as the path integral cross entropy method or PICE (see the numerical sketch after this list)
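A minimal numerical sketch of such a PICE update loop: the code below learns a single-parameter linear feedback controller u_θ(x) = θx for a hypothetical one-dimensional problem. The rollout weight exp(−S_u/λ) and the weighted-noise form of the gradient follow one common form of the PICE gradient estimator; the dynamics, costs, and all constants are illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Minimal PICE sketch: learn a linear state-feedback gain u_theta(x) = theta * x
# by gradient descent on the cross-entropy KL divergence.  All model choices
# (f, V, phi, lambda, eta, horizon) are illustrative assumptions.
rng = np.random.default_rng(1)
dt, T, N, lam, eta = 0.01, 1.0, 5_000, 1.0, 0.1
steps = int(T / dt)
f = lambda x: -x                       # assumed drift
V = lambda x: 0.5 * x**2               # assumed state cost
phi = lambda x: 10.0 * x**2            # assumed end cost

theta = 0.0                            # feedback gain to be learned
for _ in range(50):
    x = np.full(N, 1.0)                # rollouts start at x0 = 1
    S = np.zeros(N)                    # cost incl. control and noise-cross terms
    G = np.zeros(N)                    # accumulates (du/dtheta) dW = x dW
    for _ in range(steps):
        u = theta * x
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        S += (V(x) + 0.5 * u**2) * dt + u * dW
        G += x * dW
        x += (f(x) + u) * dt + dW      # controlled dynamics
    S += phi(x)
    w = np.exp(-(S - S.min()) / lam)   # importance weights, shifted for stability
    w /= w.sum()
    grad = -(w @ G) / lam              # assumed MC form of dKL/dtheta
    theta -= eta * grad                # gradient descent step
print("learned feedback gain theta:", theta)
```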

Summary

Introduction

Stochastic optimal control theory (SOC) considers the problem of computing an optimal sequence of actions to attain a future goal. In [13] it was observed that posterior inference in a certain class of diffusion processes can be mapped onto a stochastic optimal control problem. These so-called path integral (PI) control problems [20] represent a restricted class of non-linear control problems with arbitrary dynamics and state cost, but with dynamics that depend linearly on the control and a quadratic control cost. For this class of control problems, the Bellman equation can be transformed into a linear partial differential equation, as sketched below. Within the robotics and control community, there are several approaches to dealing with this problem.
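As a sketch of this linearisation, in standard path integral control notation (the symbols below are assumed, not quoted from the paper): for dynamics dx = f dt + g (u dt + dW), state cost V, and quadratic control cost ½ uᵀR u, the log transform ψ = exp(−J/λ) removes the non-linearity in the Hamilton–Jacobi–Bellman equation for the optimal cost-to-go J, provided the noise covariance ν and the control cost are matched, R = λ ν⁻¹:

```latex
% Log transform that linearises the HJB equation (standard PI-control
% notation; an assumed sketch, not an excerpt from the paper).
\begin{align}
  -\partial_t J &= \min_u\Big(\tfrac12 u^\top R u + V
      + (f + g u)^\top \nabla_x J
      + \tfrac12\operatorname{Tr}\big(g \nu g^\top \nabla_x^2 J\big)\Big),\\
  \psi &= e^{-J/\lambda},\qquad R = \lambda\,\nu^{-1}
  \;\Longrightarrow\;
  \partial_t \psi = \Big(\tfrac{V}{\lambda}
      - f^\top \nabla_x
      - \tfrac12\operatorname{Tr}\big(g \nu g^\top \nabla_x^2\big)\Big)\psi .
\end{align}
```

Because the transformed equation is linear in ψ, its solution admits a Feynman–Kac representation as an expectation over uncontrolled trajectories, which is what makes Monte Carlo estimation of the optimal control possible.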

Deterministic Control and Local Linearisation
Model Predictive Control
Reinforcement Learning
Outline
Path Integral Control
The Linear HJB Equation
Proof of the Theorem
Monte Carlo Sampling
The Cross-Entropy Method
The Kullback–Leibler Formulation of the Path Integral Control Problem
Numerical Illustration
Bayesian System Identification
Findings
Summary and Discussion