Abstract

In this paper, we develop a unified framework for the problem of optimal nonlinear feedback control of nonlinear stochastic dynamical systems. Specifically, we provide a simplified, tutorial treatment of stochastic optimal control, focusing on the connections between stochastic Lyapunov theory and stochastic Hamilton-Jacobi-Bellman theory. In particular, we show that asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by a Lyapunov function that is, in turn, a solution to the steady-state form of the stochastic Hamilton-Jacobi-Bellman equation, thus guaranteeing both stochastic stability and optimality. In addition, we develop optimal feedback controllers for affine nonlinear systems using an inverse optimality framework tailored to the stochastic stabilization problem. These results are then used to extend nonlinear feedback controllers from the literature that minimize general polynomial and multilinear performance criteria.
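For orientation, a minimal sketch of the steady-state stochastic Hamilton-Jacobi-Bellman equation referenced above, in one standard formulation for an affine stochastic system dx = [f(x) + G(x)u] dt + D(x) dw with running cost L(x,u); the notation (drift f, input matrix G, diffusion D, value function V) is a common convention assumed here, not taken verbatim from the paper:

\[
0 \;=\; \min_{u}\Big[\, L(x,u) \;+\; V'(x)\big(f(x) + G(x)u\big) \;+\; \tfrac{1}{2}\,\operatorname{tr}\!\big(D(x)^{\mathsf{T}} V''(x)\, D(x)\big) \Big], \qquad V(0) = 0.
\]

Under this formulation, the minimizing control u*(x) furnishes the optimal feedback law, while V itself serves as the Lyapunov function certifying asymptotic stability in probability of the closed-loop system.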
