Abstract

This paper addresses the problem of designing a sampled-data state feedback control law for continuous-time Markov jump linear systems (MJLS). The main goal is to characterize the optimal solution of this class of problems in the context of H2 and H∞ performance. The theoretical results are based on the direct application of Bellman's celebrated Principle of Optimality, expressed through the dynamic programming equation associated with the time interval between two successive sampling instants. The design conditions are expressed through Differential Linear Matrix Inequalities (DLMIs). The proposed method is simpler than those available in the literature for this class of systems, since it can be implemented without resorting to an iterative algorithm. An example is solved for illustration.
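The abstract does not state the design conditions themselves, so the sketch below is only a generic illustration of how a DLMI can be handled numerically: a standard H2-type coupled differential Lyapunov inequality for a two-mode MJLS is discretized on a uniform time grid over one sampling interval and solved as a finite set of LMIs with a semidefinite programming tool. The system matrices, transition rate matrix, horizon, and grid size are hypothetical placeholders, and this analysis-type inequality is not necessarily the synthesis condition derived in the paper.

```python
# Hypothetical illustration (not the paper's conditions): discretize the coupled DLMI
#   dP_i/dt + A_i' P_i + P_i A_i + sum_j lambda_ij P_j + C_i' C_i <= 0,  P_i(T) >= 0,
# on a uniform grid over one interval [0, T] and solve the resulting finite LMI set.
import numpy as np
import cvxpy as cp

# Hypothetical problem data for a two-mode MJLS
A = [np.array([[0.0, 1.0], [-2.0, -1.0]]),     # mode 1 dynamics
     np.array([[0.0, 1.0], [-1.0, -3.0]])]     # mode 2 dynamics
C = [np.eye(2), np.eye(2)]                     # performance outputs
Lam = np.array([[-0.5, 0.5],                   # Markov transition rate matrix
                [0.8, -0.8]])
n, N_modes = 2, 2
T, N = 0.5, 50                                 # sampling period and number of grid steps
h = T / N

# One symmetric matrix variable per mode and grid point: P[i][k] ~ P_i(k*h)
P = [[cp.Variable((n, n), symmetric=True) for _ in range(N + 1)] for _ in range(N_modes)]

constraints = []
for i in range(N_modes):
    constraints.append(P[i][N] >> 0)           # terminal condition P_i(T) >= 0
    for k in range(N):
        # Forward-difference approximation of dP_i/dt at grid point k
        dP = (P[i][k + 1] - P[i][k]) / h
        coupling = sum(Lam[i, j] * P[j][k] for j in range(N_modes))
        constraints.append(dP + A[i].T @ P[i][k] + P[i][k] @ A[i]
                           + coupling + C[i].T @ C[i] << 0)

# Tighten the bound by minimizing a trace-type cost on the initial values,
# as is common in H2-type analysis
cost = sum(cp.trace(P[i][0]) for i in range(N_modes))
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status, " cost:", prob.value)
```

A finer grid gives a closer approximation of the DLMI at the price of more LMI constraints; the grid size above is chosen only for illustration.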
