Abstract

Safety assessment of cyber-physical systems (CPSs) demands tremendous effort as system complexity grows. Fault injection (FI) is a well-known approach to CPS safety assessment: faults are injected into the system under test with the goal of uncovering catastrophic faults that can cause the system to fail. Such catastrophic faults occur rarely, and finding them is labor-intensive and costly. In this study, we propose a reinforcement learning (RL)-based method that automatically configures faults in the system under test and finds catastrophic faults at the model level, in the early stages of system development. The proposed method provides a guideline for using high-level domain knowledge about a system model to construct the RL agent and the fault injection setup. Specifically, we used the system (safety) specification to shape the reward function of the RL agent, which dynamically interacts with the model under test to identify catastrophic faults. We compared the proposed method with random-based fault injection in two case studies using MATLAB/Simulink; the proposed method outperformed random-based fault injection in both the severity and the number of faults found.
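To make the idea concrete, the following is a minimal sketch of spec-shaped, RL-driven fault injection. It is not the paper's implementation: the "model under test" is a hypothetical first-order speed controller, the safety specification "speed must stay below a limit" is assumed, and a stateless Q-learning agent searches over candidate sensor-gain faults, receiving a reward only when an injected fault violates the specification.

```python
# Illustrative sketch (assumed toy example, not the paper's setup):
# a stateless Q-learning agent searches fault-injection parameters
# for a toy model. The safety specification "speed < SPEED_LIMIT"
# shapes the reward, as described in the abstract.
import random

SPEED_LIMIT = 30.0                    # assumed safety specification
FAULT_GAINS = [0.5, 1.0, 2.0, 4.0]    # candidate sensor-gain faults (actions)

def run_model(fault_gain, steps=50):
    """Simulate a toy proportional speed controller whose sensor
    reading is corrupted by the injected gain fault.
    Returns the final speed, used to judge fault severity."""
    speed, target = 0.0, 20.0
    for _ in range(steps):
        measured = speed * fault_gain        # injected sensor fault
        speed += 0.2 * (target - measured)   # proportional control step
    return speed

def reward(final_speed):
    """Reward shaped from the safety spec: zero when the spec holds,
    and proportional to the violation severity otherwise."""
    return max(0.0, final_speed - SPEED_LIMIT)

def train(episodes=200, eps=0.2, alpha=0.5, seed=0):
    """Epsilon-greedy, stateless Q-learning over fault configurations."""
    rng = random.Random(seed)
    q = {g: 0.0 for g in FAULT_GAINS}
    for _ in range(episodes):
        if rng.random() < eps:
            g = rng.choice(FAULT_GAINS)      # explore
        else:
            g = max(q, key=q.get)            # exploit best-known fault
        r = reward(run_model(g))
        q[g] += alpha * (r - q[g])           # incremental value update
    return q

q = train()
best = max(q, key=q.get)   # fault configuration judged most catastrophic
```

In this toy setting only the under-reading sensor fault (gain 0.5) drives the controller past the speed limit, so the agent's value estimates concentrate on it; in the paper's setting the same loop would instead drive fault parameters in a Simulink model.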
