Abstract

Control design for stochastic systems is traditionally based on optimizing the expected value of a suitably chosen loss function. This well-elaborated and well-understood task is, in practice, limited by the computational complexity of the associated dynamic programming equations or their equivalents. It is therefore worthwhile to search for an alternative formulation that leads to a more tractable design. Such an alternative is presented here; it yields a simpler form of the design equations and opens the way to a systematic approximation of the optimizing control design. The controller is designed so that the Kullback-Leibler distance between the probabilistic description of the closed loop and a required (ideal) description is minimized. This leads to an explicit form of the randomized optimal controller, which depends on the solution of a functional equation with a simpler structure than general dynamic programming equations. The basic paradigm is proposed and the resulting algorithm is discussed. For illustration, it is applied to linear Gaussian systems, where the desired result is obtained: the optimal controller is determined by a discrete-time Riccati equation. Less trivial applications will be treated elsewhere.
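To make the closing claim concrete, the sketch below illustrates the kind of discrete-time Riccati recursion and state-feedback gain the abstract refers to for the linear Gaussian case. It is a minimal, standard LQ-style recursion rather than the paper's exact fully probabilistic design equations; the system matrices, weights Q and R, the horizon, and the exploration noise added to mimic a randomized controller are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's exact equations): a discrete-time Riccati
# recursion for a linear Gaussian system x_{t+1} = A x_t + B u_t + noise.
# In fully probabilistic design the weights would be induced by the chosen
# ideal closed-loop distribution; here Q, R are assumed directly.
import numpy as np

def riccati_gains(A, B, Q, R, horizon):
    """Backward Riccati recursion; returns state-feedback gains ordered forward in time."""
    P = Q.copy()                                  # terminal cost-to-go weight
    gains = []
    for _ in range(horizon):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)       # K_t = S^{-1} B' P A
        P = Q + A.T @ P @ (A - B @ K)             # updated cost-to-go weight
        gains.append(K)
    return list(reversed(gains))

# Illustrative use: a randomized (Gaussian) control law built around the gain,
# echoing the abstract's point that the optimal controller is randomized.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])
K0 = riccati_gains(A, B, Q, R, horizon=50)[0]
x = np.array([1.0, 0.0])
u = -K0 @ x + 0.05 * np.random.randn(1)           # mean -K x plus small randomization
```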
