Abstract

Machine learning approaches have been used successfully to create high-performance control components for cyber-physical systems, where the control dynamics result from the combination of many subsystems. However, these approaches may lack the trustworthiness required to guarantee their reliable use in safety-critical contexts. In this paper, we propose a combination of interval arithmetic and theorem-proving verification techniques to analyze safety properties of closed-loop systems that embed neural network components. We apply the proposed approach to a model-predictive controller for autonomous driving and compare its neural network verification performance with that of existing tools. The results show that open-loop neural network verification through interval arithmetic can outperform existing approaches, proving properties with lower time overhead. Furthermore, we demonstrate that the two techniques can be combined to construct a formal model, in higher-order logic, of the closed-loop controlled system embedding the network.
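The abstract's open-loop interval-arithmetic step can be illustrated with a minimal interval bound propagation sketch. The network shape, random weights, and the `propagate` helper below are illustrative assumptions, not the paper's implementation or tooling; they only show how an input box is pushed through affine and ReLU layers to certify an output range.

```python
import numpy as np

def interval_affine(W, b, lower, upper):
    """Propagate the box [lower, upper] through the affine map W x + b.

    Splitting W into positive and negative parts gives tight interval
    bounds for an affine transformation (standard interval arithmetic).
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def interval_relu(lower, upper):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

def propagate(layers, lower, upper):
    """Push an input box through a list of (W, b) layers with ReLU between them."""
    for i, (W, b) in enumerate(layers):
        lower, upper = interval_affine(W, b, lower, upper)
        if i < len(layers) - 1:  # no activation after the output layer
            lower, upper = interval_relu(lower, upper)
    return lower, upper

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 2-4-1 network standing in for a learned control component.
    layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
              (rng.standard_normal((1, 4)), rng.standard_normal(1))]
    x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
    out_lo, out_hi = propagate(layers, x_lo, x_hi)
    # A safety property holds if the certified output range stays within
    # the allowed actuation bounds, e.g. |u| <= u_max.
    print("certified output range:", out_lo, out_hi)
```

In the closed-loop setting described in the abstract, bounds of this kind would be composed with a formal plant model inside a theorem prover, rather than checked numerically as in this standalone sketch.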
