Abstract

Artificial intelligence-based (a.k.a. AI-based) controllers have received significant attention in the past few years due to their broad applications in cyber-physical systems (CPSs) for accomplishing complex control missions. However, guaranteeing the safety and reliability of CPSs equipped with such (uncertified) controllers, which is of vital importance in many real-life safety-critical applications, is currently very challenging. To cope with this difficulty, we propose a Safe-visor architecture for sandboxing AI-based controllers in stochastic CPSs. The proposed framework contains (i) a history-based supervisor, which checks inputs from the AI-based controller and balances the functionality and safety of the system, and (ii) a safety advisor, which provides a fallback when the AI-based controller endangers the safety of the system. By employing this architecture, we provide formal probabilistic guarantees on the satisfaction of safety specifications that can be represented by the accepting languages of deterministic finite automata (DFAs), while AI-based controllers can still be employed in the control loop even though they are not reliable.
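To make the sandboxing idea concrete, the following minimal Python sketch mimics the closed loop described above: a supervisor intercepts each input proposed by the AI-based controller, accepts it only if an estimated probability of violating the safety specification stays below a tolerance, and otherwise falls back to the safety advisor. All names, the scalar toy dynamics, the risk estimate, and the tolerance `epsilon` are illustrative assumptions for exposition, not the paper's formal construction.

```python
import random


class SafetyAdvisor:
    """Fallback controller assumed to respect the safety specification
    (hypothetical interface; the paper synthesizes such an advisor formally)."""

    def advise(self, state):
        # Illustrative conservative action: steer the state toward the origin.
        return -0.5 * state


class Supervisor:
    """History-based supervisor (sketch): accepts the AI controller's input
    only if the estimated violation risk stays below a tolerance epsilon."""

    def __init__(self, advisor, epsilon=0.05):
        self.advisor = advisor
        self.epsilon = epsilon
        self.history = []  # record of (state, proposed action) pairs

    def violation_risk(self, state, action):
        # Placeholder risk estimate; a real Safe-visor would compute this
        # formally from the stochastic system model and the DFA encoding
        # the safety specification.
        predicted = state + action
        return min(1.0, abs(predicted) / 10.0)

    def filter(self, state, ai_action):
        self.history.append((state, ai_action))
        if self.violation_risk(state, ai_action) <= self.epsilon:
            return ai_action                # accept the AI controller's input
        return self.advisor.advise(state)   # reject it: use the safe fallback


def ai_controller(state):
    # Stand-in for a learned, possibly unreliable policy.
    return random.uniform(-2.0, 2.0)


# Closed-loop simulation with toy stochastic scalar dynamics.
supervisor = Supervisor(SafetyAdvisor())
state = 0.0
for step in range(20):
    action = supervisor.filter(state, ai_controller(state))
    state = state + action + random.gauss(0.0, 0.1)
```

In this sketch the AI-based controller is never removed from the loop; it is merely overridden on the steps where its proposed input would push the estimated violation risk above the tolerance, matching the compromise between functionality and safety described in the abstract.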
