Abstract

This paper investigates the deep reinforcement learning based secure control problem for cyber–physical systems (CPS) under false data injection attacks. We model the CPS under attack as a Markov decision process (MDP), based on which the secure controller design is formulated as learning an action policy from data. Building on the soft actor–critic algorithm, a Lyapunov-based soft actor–critic learning algorithm is proposed to train a secure policy offline for CPS under attacks. Unlike existing results, not only the convergence of the learning algorithm but also the stability of the closed-loop system under the learned policy is proved, which is essential for security- and stability-critical applications. Finally, both a satellite attitude control system and a robot arm system are used to demonstrate the effectiveness of the proposed scheme, and comparisons with a classical PD controller illustrate the advantages of the proposed control algorithm.
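The abstract's core idea, augmenting policy learning with a Lyapunov decrease condition so that the learned controller remains stabilizing under an injected actuation bias, can be sketched in a deliberately simplified form. Everything below is illustrative, not the paper's method: a hypothetical 1-D linear plant, a constant false-data bias, a linear policy with gain `k`, a quadratic Lyapunov candidate `L(x) = c*x**2`, and a finite-difference policy-gradient step standing in for the actual soft actor–critic updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D plant x' = A*x + B*(u + bias): the attacker injects a
# constant false-data bias into the actuation channel (illustrative only).
A, B, attack_bias = 0.9, 0.5, 0.2

def step(x, u):
    """Plant dynamics under attack, with small process noise."""
    return A * x + B * (u + attack_bias) + 0.01 * rng.standard_normal()

def reward(x, u):
    """Quadratic control cost, expressed as a reward to maximize."""
    return -(x**2 + 0.1 * u**2)

# Linear policy u = -k*x and quadratic Lyapunov candidate L(x) = c*x**2.
k, c = 0.0, 1.0
lr, lam, eps = 0.05, 5.0, 1e-3

for episode in range(200):
    x = rng.standard_normal()
    for _ in range(20):
        def objective(kk):
            # Reward minus a penalty on violating the Lyapunov decrease
            # condition L(x') - L(x) < 0 (nominal dynamics, no noise).
            uu = -kk * x
            xn = A * x + B * (uu + attack_bias)
            return reward(x, uu) - lam * max(0.0, c * xn**2 - c * x**2)

        # Finite-difference gradient ascent on the penalized objective,
        # standing in for the actor update of soft actor-critic.
        grad = (objective(k + eps) - objective(k - eps)) / (2 * eps)
        k += lr * grad
        x = step(x, -k * x)

print(f"learned gain k = {k:.3f}")
```

The Lyapunov penalty is what steers learning toward a stabilizing gain: with `k = 0` the plant contracts too slowly against the bias, the decrease condition is violated, and the gradient pushes `k` upward until the closed-loop map shrinks `L(x)` along trajectories.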
