In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for partial stability and partial-state stabilization of stochastic dynamical systems. Partial asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state. This Lyapunov function is shown to be a solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both partial stability in probability and optimality. The overall framework provides the foundation for extending optimal linear-quadratic stochastic controller synthesis to nonlinear-nonquadratic optimal partial-state stochastic stabilization. Connections to optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic cost functionals are also provided. Finally, we develop optimal feedback controllers for affine stochastic nonlinear systems using an inverse optimality framework tailored to the partial-state stochastic stabilization problem and use this result to address polynomial and multilinear forms in the performance criterion.
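For context, under standard assumptions on a controlled Itô diffusion with an infinite-horizon cost, the steady-state stochastic Hamilton–Jacobi–Bellman equation referenced above takes the generic form sketched below; the notation (F, D, L, V, and the admissible control set U) is illustrative and is not drawn from the paper itself.

% A minimal sketch, assuming dynamics dx = F(x,u)\,dt + D(x,u)\,dw and
% cost J(x_0,u) = \mathbb{E}\int_0^\infty L(x(t),u(t))\,dt; all symbols
% here are illustrative placeholders, not the paper's notation.
\[
0 = \min_{u \in U}\Big[\, L(x,u) + V'(x)\,F(x,u)
    + \tfrac{1}{2}\,\operatorname{tr}\!\big(D(x,u)^{\mathrm{T}}\, V''(x)\, D(x,u)\big) \Big],
\qquad x \in \mathbb{R}^{n},
\]

where V'(x) and V''(x) denote the gradient and Hessian of the candidate value function V; the minimizing control, when it exists, furnishes the optimal stabilizing feedback.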