Abstract

Cyber-Physical Systems (CPS) increasingly incorporate Learning-Enabled Components (LECs) to implement complex functions. By LEC we mean a component (typically, but not exclusively, implemented in software) that is realized with the help of data-driven techniques such as machine learning. For example, an LEC in an autonomous car can implement a lane-following function: the developer trains a convolutional neural network with a stream of road images as input and the observed actions of a human driver as output. For high-consequence systems, the challenge is to prove that the resulting system is both safe (it does no harm) and live (it remains functional). Safety is perhaps the foremost problem in autonomous vehicles, especially for those that operate in a less-regulated environment, such as the road network. The traditional approach to proving the safety of systems is based on extensively documented but often informal arguments, which are very hard to apply to CPS with LECs. The talk will focus on a current project that aims to change this paradigm by introducing (1) formal verification techniques wherever possible, including proving properties of the 'learned' component; (2) monitoring technology for assurance, to indicate when the LEC is not performing well; and (3) a formalized safety-case argumentation process that can be dynamically evaluated. The application target is autonomous vehicles. The goal is to construct a model-based engineering process and a supporting model-based toolchain that can be used for the engineering and systematic assurance of CPS with LECs.
