Abstract
In many practical applications, stability with respect to part of the system's states, together with finite-time convergence to the equilibrium state of interest, is often required. Finite-time partial stability concerns dynamical systems for which part of the trajectory converges to an equilibrium state in finite time. In this paper, we address finite-time partial stability in probability and uniform finite-time partial stability in probability for nonlinear stochastic dynamical systems. Specifically, we provide Lyapunov conditions for guaranteeing finite-time partial stability in probability involving a Lyapunov function that is positive definite and decrescent with respect to part of the system state and that satisfies a differential inequality involving fractional powers. In addition, we show that finite-time partial stability in probability implies uniqueness of solutions in forward time, and we establish necessary and sufficient conditions for almost sure continuity of the settling-time operator of the nonlinear stochastic dynamical system. Finally, we develop a unified framework for nonlinear analysis and optimal feedback control design addressing finite-time partial stochastic stability and finite-time, partial-state stochastic stabilization. Finite-time partial stability in probability of the closed-loop nonlinear system is guaranteed by a Lyapunov function that is positive definite and decrescent with respect to part of the system state, and this function is shown to be the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both finite-time, partial-state stability and optimality. The overall framework provides the foundation for extending stochastic optimal linear–quadratic controller synthesis to nonlinear–nonquadratic optimal finite-time, partial-state stochastic stabilization.
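To make the abstract's key objects concrete, the following sketch records the generic forms these conditions typically take in the finite-time stochastic stability literature; the notation here (the drift $f$, diffusion $D$, constants $c$ and $\alpha$, cost $L$, and controlled dynamics $F$) is assumed for exposition and is not taken verbatim from the paper. For a stochastic system $dx(t) = f(x(t))\,dt + D(x(t))\,dw(t)$ with partitioned state $x = (x_1, x_2)$ and stability sought with respect to $x_1$, a typical fractional-power Lyapunov differential inequality is
\[
  \mathcal{L}V(x_1, x_2) \le -c \bigl( V(x_1, x_2) \bigr)^{\alpha}, \qquad c > 0, \quad 0 < \alpha < 1,
\]
where $\mathcal{L}$ denotes the infinitesimal generator of the diffusion. In the standard (non-partial) stochastic setting, such an inequality yields the well-known bound on the expected settling time
\[
  \mathbb{E}\bigl[ T(x_0) \bigr] \le \frac{V(x_0)^{1-\alpha}}{c\,(1-\alpha)}.
\]
For the optimal control portion, the steady-state stochastic Hamilton–Jacobi–Bellman equation referenced above takes the generic form
\[
  0 = \min_{u} \Bigl[ L(x,u) + V'(x) F(x,u)
      + \tfrac{1}{2}\,\operatorname{tr}\bigl( D^{\mathrm{T}}(x,u)\, V''(x)\, D(x,u) \bigr) \Bigr],
\]
whose solution $V$ serves simultaneously as the finite-time partial-stability Lyapunov function and the optimal value function.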