Abstract

This paper is concerned with the controllability and observability of discrete-time linear systems whose parameters jump randomly according to finite-state Markov processes, and with the relationship between these properties and the solution of the infinite-time jump linear quadratic (JLQ) optimal control problem. The solution of the Markovian JLQ problem with finite or infinite time horizons is known. Necessary and sufficient conditions for the existence of optimal constant control laws that yield finite optimal expected costs as the time horizon becomes infinite are also known, as are sufficient conditions for these steady-state control laws to stabilize the controlled system (Chizeck et al. 1986). These conditions are not easy to test, however. Various definitions of controllability and observability for stochastic systems exist in the literature, but unfortunately none relates to the steady-state JLQ control problem in a manner analogous to the role played by deterministic controllability and observability in the linear quadratic optimal control problem. In this paper, new and refined definitions of controllability and observability for jump linear systems are developed. These properties admit relatively simple algebraic tests. More importantly, they can be used to determine the existence of finite steady-state JLQ solutions.
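For concreteness, a minimal sketch of the standard discrete-time jump linear model and quadratic cost underlying the JLQ problem is given below. The symbols used (the mode process r_k, matrices A, B, Q, R, and terminal weight K_T) follow common convention in the jump linear literature and are assumptions here, not necessarily this paper's own notation.

\[
  x_{k+1} = A(r_k)\,x_k + B(r_k)\,u_k,
\]
\[
  J_N = \mathbb{E}\!\left[\sum_{k=0}^{N-1}\bigl(x_k^{\top} Q(r_k)\,x_k + u_k^{\top} R(r_k)\,u_k\bigr) + x_N^{\top} K_T(r_N)\,x_N\right],
\]

where the mode \(r_k\) is a finite-state Markov chain on \(\{1,\dots,M\}\) with transition probabilities \(p_{ij} = \Pr(r_{k+1}=j \mid r_k=i)\), and \(Q(i) \succeq 0\), \(R(i) \succ 0\) for each mode \(i\). In this setting, the infinite-time question is when a constant mode-dependent feedback \(u_k = -L(r_k)\,x_k\) yields finite expected cost as \(N \to \infty\), and when it stabilizes the closed-loop system.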
