Abstract

We are concerned with numerical methods for infinite time horizon risk sensitive stochastic control problems. Let $x_t$ be a controlled Markov process in $\mathbb{R}^n$, governed by a given stochastic differential equation; the controller can observe $x_t$ (state feedback). In risk sensitive control, exponential running cost criteria are considered, with a fixed measure of risk sensitivity. Starting from the optimal cost function on a finite time horizon, under suitable assumptions one obtains the Isaacs partial differential equation (PDE) of a stochastic differential game with an average cost per unit time payoff criterion. In this game, the minimizing player chooses the control $u_t$, while the maximizing player, corresponding to unfriendly nature, chooses a control $v_t$. The exponential LQR (LEQR) problem leads to a deterministic differential game, the same game that arises in state-space robust control formulations of disturbance attenuation problems. LEQR problems reduce to solving matrix Riccati equations; this technique is no longer available for nonlinear dynamics or nonquadratic costs. Instead, we resort to numerical solution of the PDE, using a heuristic.
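To illustrate the Riccati reduction mentioned for the LEQR case, the following is a minimal sketch, assuming the standard game-type algebraic Riccati equation from disturbance-attenuation theory, $A^\top P + PA + Q - P\,(B R^{-1} B^\top - \gamma^{-2} D D^\top)\,P = 0$, solved via the stable invariant subspace of the associated Hamiltonian matrix. The matrices, the attenuation level $\gamma$, and the function name `game_riccati` are illustrative choices, not taken from the paper.

```python
import numpy as np

def game_riccati(A, B, D, Q, R, gamma):
    """Stabilizing solution P of the game-type algebraic Riccati equation
        A'P + P A + Q - P (B R^{-1} B' - gamma^{-2} D D') P = 0,
    obtained from the stable invariant subspace of the Hamiltonian matrix.
    (Assumes a stabilizing solution exists for the chosen gamma.)"""
    S = B @ np.linalg.solve(R, B.T) - (gamma ** -2) * (D @ D.T)
    H = np.block([[A, -S],
                  [-Q, -A.T]])
    eigvals, eigvecs = np.linalg.eig(H)
    stable = eigvecs[:, eigvals.real < 0]      # basis of the stable subspace
    n = A.shape[0]
    X1, X2 = stable[:n, :], stable[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

# Toy 2-state example (all data illustrative)
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
Q = np.eye(2)
R = np.array([[1.0]])
P = game_riccati(A, B, D, Q, R, gamma=2.0)
u_gain = -np.linalg.solve(R, B.T @ P)          # minimizing player's feedback gain
v_gain = (2.0 ** -2) * D.T @ P                 # maximizing ("nature") feedback gain
```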
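The abstract does not specify the heuristic used for the nonlinear, nonquadratic case. Purely as a sketch of one generic approach, the code below discretizes a one-dimensional ergodic Isaacs PDE by a Kushner–Dupuis-style Markov chain approximation and runs relative value iteration on the resulting min-max dynamic-programming operator. The model, grids, payoff, and convergence behavior are all illustrative assumptions, not the authors' scheme.

```python
import numpy as np

# --- illustrative 1-D model (none of these choices come from the paper) ----
sigma = 1.0                                   # diffusion coefficient
gamma = 2.0                                   # attenuation / risk level
b = lambda x: -x                              # nominal drift
ell = lambda x, u, v: x**2 + 0.5*u**2 - 0.5*gamma**2*v**2   # game running cost

xs = np.linspace(-3.0, 3.0, 121)              # state grid
dx = xs[1] - xs[0]
U = np.linspace(-4.0, 4.0, 9)                 # minimizing controls u
V = np.linspace(-2.0, 2.0, 5)                 # maximizing disturbances v

# uniform interpolation interval for the approximating Markov chain
Fmax = max(abs(b(x) + u + v) for x in xs for u in U for v in V)
Qn = sigma**2 + dx * Fmax
dt = dx**2 / Qn

def bellman_isaacs(W):
    """One sweep of the discrete min-max dynamic-programming operator
    (upwind transition probabilities, reflecting boundaries)."""
    Wp = np.append(W[1:], W[-1])              # value at right neighbour
    Wm = np.insert(W[:-1], 0, W[0])           # value at left neighbour
    TW = np.empty_like(W)
    for i, x in enumerate(xs):
        best_u = np.inf
        for u in U:
            worst_v = -np.inf
            for v in V:
                f = b(x) + u + v
                pp = (0.5*sigma**2 + dx*max(f, 0.0)) / Qn
                pm = (0.5*sigma**2 + dx*max(-f, 0.0)) / Qn
                p0 = 1.0 - pp - pm            # stay-put probability
                val = ell(x, u, v)*dt + pp*Wp[i] + pm*Wm[i] + p0*W[i]
                worst_v = max(worst_v, val)
            best_u = min(best_u, worst_v)
        TW[i] = best_u
    return TW

# relative value iteration: lam approximates the average cost per unit time
W = np.zeros_like(xs)
ref = len(xs) // 2
for _ in range(1000):
    TW = bellman_isaacs(W)
    lam = TW[ref] / dt
    W = TW - TW[ref]
print("estimated ergodic value of the game:", lam)
```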
