Abstract

This paper introduces a method to control a class of jump Markov linear systems with uncertain initialization of the continuous state and affected by disturbances. Both types of uncertainty are modeled as stochastic processes with arbitrarily chosen probability distributions for which, however, the expected values and (co-)variances are known. The paper elaborates on the control task of steering the uncertain system into a target set by use of continuous controls, while chance constraints have to be satisfied for all possible state sequences of the Markov chain. The proposed approach uses stochastic model predictive control on moving finite-time horizons with tailored constraints to achieve the control goal with prescribed confidence. Key steps of the procedure are (i) to over-approximate probabilistic reachable sets by use of the Chebyshev inequality, and (ii) to embed a tightened version of the original constraints into the optimization problem in order to obtain a control strategy satisfying the specifications. Convergence of the probabilistic reachable sets is attained by suitably bounding the state covariance matrices for arbitrary Markov chain sequences. The paper presents the main steps of the solution approach, discusses its properties, and illustrates the principle with a numerical example.
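To illustrate step (i) in isolation: for a predicted state with known mean and covariance, the multidimensional Chebyshev inequality yields a distribution-free ellipsoidal over-approximation of a probabilistic reachable set. The symbols below ($x_k$, $\hat{x}_k$, $\Sigma_k$, $\delta$) are chosen here for illustration and need not match the paper's notation; $\Sigma_k$ is assumed positive definite:

$$ \Pr\!\big[(x_k-\hat{x}_k)^\top \Sigma_k^{-1}(x_k-\hat{x}_k) \le \lambda\big] \;\ge\; 1-\frac{n}{\lambda}, $$

so that, choosing $\lambda = n/\delta$, the ellipsoid

$$ \mathcal{R}_k = \big\{\, x \in \mathbb{R}^n : (x-\hat{x}_k)^\top \Sigma_k^{-1}(x-\hat{x}_k) \le n/\delta \,\big\} $$

contains $x_k$ with probability at least $1-\delta$, regardless of the underlying distribution.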

Highlights

  • For some systems to be controlled, the dynamics can only be represented with inherent uncertainty, stemming from disturbance quantities and from uncertain operating modes that lead to different parameterizations or even a varying topology.

  • In order to solve the stated control problem, this paper proposes a new method which combines concepts of stochastic model predictive control (SMPC), the conservative approximation of the distributions G_x and G_w, and the computation of stochastic reachable sets of jump Markov linear systems (JMLS).

  • This principle allows for a control strategy in which the expected state is steered towards the target set by solving a deterministic optimization problem within the MPC part, as was proposed in (Tonne et al., 2015); a conceptual sketch of such a receding-horizon scheme is given after this list.
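The following is a minimal conceptual sketch of such a receding-horizon scheme in Python, not the algorithm of the paper: the two-mode model data, the mean prediction (which simply averages the mode dynamics with the Markov chain's mode distribution), the scalar covariance recursion, and the solver call are all placeholder choices made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# --- Illustrative two-mode jump Markov linear system (placeholder data) ---
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # mode 1 dynamics
     np.array([[1.0, 0.2], [0.0, 0.9]])]      # mode 2 dynamics
B = [np.array([[0.0], [0.1]]) for _ in range(2)]
P = np.array([[0.9, 0.1],                     # Markov chain transition matrix
              [0.2, 0.8]])
Sigma_w = 1e-3 * np.eye(2)                    # disturbance covariance
N = 5                                         # prediction horizon
x_target = np.array([1.0, 0.0])               # target for the expected state
x_max = 2.0                                   # nominal bound on the first state
delta = 0.05                                  # admissible violation probability per step

def predict_mean(x_hat, mu, u_seq):
    """Propagate the expected state, averaging the mode dynamics with the
    Markov chain's mode distribution mu (a simplification for this sketch)."""
    xs = [x_hat]
    for k in range(N):
        A_bar = mu[0] * A[0] + mu[1] * A[1]
        B_bar = mu[0] * B[0] + mu[1] * B[1]
        xs.append(A_bar @ xs[-1] + (B_bar @ u_seq[k:k + 1]).ravel())
        mu = mu @ P                           # update the mode distribution
    return xs

# Crude scalar stand-in for covariance bounding: a worst-case (over modes)
# recursion for the variance of the first state only, to keep the sketch short.
sigmas, s = [], 0.0
for k in range(N):
    s = max((A[i][0, 0] ** 2) * s for i in range(2)) + Sigma_w[0, 0]
    sigmas.append(s)

def tightening(sigma):
    """Chebyshev-based back-off: with probability >= 1 - delta the first
    state stays within sqrt(n/delta) standard deviations of its mean."""
    n = 2
    return np.sqrt(n / delta) * np.sqrt(sigma)

def cost(u_seq, x_hat, mu):
    xs = predict_mean(x_hat, mu, u_seq)
    return sum(np.sum((x - x_target) ** 2) for x in xs[1:]) + 0.1 * np.sum(u_seq ** 2)

def solve_smpc(x_hat, mu):
    cons = []
    for k in range(N):
        def g(u_seq, k=k):
            xs = predict_mean(x_hat, mu, u_seq)
            # tightened constraint: the mean respects the backed-off bound
            return (x_max - tightening(sigmas[k])) - xs[k + 1][0]
        cons.append({"type": "ineq", "fun": g})
    res = minimize(cost, np.zeros(N), args=(x_hat, mu), constraints=cons)
    return res.x[0]                           # apply only the first input

u0 = solve_smpc(np.array([0.0, 0.0]), np.array([1.0, 0.0]))
print("first control input:", u0)
```

In the paper's setting, the covariance bounding and the constraint tightening have to hold for all admissible mode sequences of the Markov chain; the sketch collapses this to a single worst-case scalar recursion purely for brevity.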


Summary

INTRODUCTION

For some systems to be controlled, the dynamics can only be represented with inherent uncertainty, stemming from disturbance quantities and from uncertain operating modes that lead to different parameterizations or even a varying topology. The constraints are formulated such that the distribution of the stochastic variables has to satisfy them with high confidence; such constraints are termed chance constraints. Some papers following this principle use particle filter control (Blackmore et al., 2007; Blackmore et al., 2010; Farina et al., 2016; Margellos, 2016), in which a finite set of samples is used to approximate the probability distributions of the JMLS. These approaches are formulated as mixed-integer linear programming problems. A numerical example for illustration is given in Section 4, before Section 5 concludes the paper.

Notation
The Control Task
PROBABILISTIC CONTROL OF JUMP MARKOV AFFINE SYSTEMS
Structure of the Control Strategy
Construction of PRS Using the Chebyshev Inequality
Bounding the State Covariance Matrix
Computation of the Tightening Constraints and MPC Formulation
Prediction Equations for the Expected State
NUMERICAL EXAMPLE
CONCLUSION
DATA AVAILABILITY STATEMENT

