Abstract

This paper presents a direct minimum-variance (MV) data-driven safe control design approach for uncertain linear discrete-time stochastic systems. The superiority of the direct MV approach is demonstrated by developing and comparing direct versus indirect learning approaches and MV versus certainty-equivalence (CE) approaches. First, it is shown that probabilistic safety can be guaranteed by ensuring probabilistic λ-contractivity of the safe set. Four data-driven algorithms, each formulated as a convex optimization (direct CE, indirect CE, direct MV, and indirect MV), are then introduced to ensure probabilistic λ-contractivity of the safe set. It is shown that while the CE approach yields a risk-neutral control design with no robustness guarantees, the MV approach yields a risk-averse control design with probabilistic safety guarantees. This is because the MV approach learns a control gain that minimizes the variance of the closed-loop state with respect to the safe set, and thus minimizes the risk of safety violation. Moreover, it is shown that the direct learning approach requires weaker data-richness conditions (i.e., lower sample complexity) than the indirect learning approach. Two simulation examples verify that the direct MV learning approach outperforms the other three approaches, since it achieves low-complexity safe learning (i.e., low sample complexity and a convex formulation) with high-probability safety guarantees.
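
As a rough illustration of the kind of convex, direct data-driven formulation described above, the following Python/cvxpy sketch enforces λ-contractivity of a polytopic safe set S = {x : Hx <= 1} directly from input-state data. This is a simplified, hypothetical example rather than the paper's algorithm: it assumes noiseless data (so it is closer in spirit to a CE design than to the MV design), uses the standard polyhedral contractivity certificate PH = H(A + BK), P >= 0, P1 <= λ1, and parameterizes the closed loop directly from data as A + BK = X1 G with X0 G = I and K = U0 G. The system matrices, data length, safe set, and regularizing objective are all illustrative choices.

# Simplified, hypothetical sketch of a direct data-driven lambda-contractivity design
# (not the paper's exact MV algorithm; noiseless data assumed).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# True system, unknown to the designer; used here only to generate data.
A = np.array([[0.8, 0.5], [0.4, 0.9]])
B = np.array([[0.5], [1.0]])

# Collect T input-state samples from a single trajectory.
T = 20
X0, X1, U0 = np.zeros((2, T)), np.zeros((2, T)), rng.standard_normal((1, T))
x = rng.standard_normal(2)
for k in range(T):
    X0[:, k] = x
    x = A @ x + B @ U0[:, k]
    X1[:, k] = x

# Polytopic safe set S = {x : H x <= 1} (here, the unit box) and contraction factor.
H = np.vstack([np.eye(2), -np.eye(2)])
lam = 0.95

# Decision variables: G parameterizes the closed loop from data, P certifies contractivity.
G = cp.Variable((T, 2))
P = cp.Variable((4, 4), nonneg=True)

constraints = [
    X0 @ G == np.eye(2),                 # so that A + B K = X1 G with K = U0 G
    P @ H == H @ (X1 @ G),               # P H = H (A + B K)
    P @ np.ones(4) <= lam * np.ones(4),  # P 1 <= lambda 1  =>  lambda-contractive set
]
cp.Problem(cp.Minimize(cp.norm(G, "fro")), constraints).solve()

K = U0 @ G.value                         # learned state-feedback gain
print("K =", K)
print("spectral radius of A + BK:", max(abs(np.linalg.eigvals(A + B @ K))))

In this sketch the entire design is a single convex program over (G, P), with the model (A, B) never identified explicitly; the MV variants discussed in the paper additionally shape the closed-loop state variance with respect to the safe set, which is not captured here.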
