Abstract

The need to make sense of complex input data within a vast variety of unpredictable scenarios has been a key driver for the use of machine learning (ML), for example in Automated Driving Systems (ADS). Such systems are usually safety-critical and therefore need to be safety assured. In order to consider the results of the safety assurance activities (e.g., the uncovering of previously unknown hazardous scenarios), a continuous approach to arguing safety is required, whilst iteratively improving ML-specific safety-relevant properties, such as robustness and prediction certainty. Such a continuous safety life cycle will only be practical with an efficient and effective approach to analyzing the impact of system changes on the safety case. In this paper, we propose a semi-automated approach for accurately identifying the impact of changes on safety arguments. We focus on arguments that reason about the sufficiency of the data used for the development of ML components. The approach qualitatively and quantitatively analyzes the impact of changes in the input space of the considered ML component on other artifacts created during the execution of the safety life cycle, such as datasets and performance requirements, and makes recommendations to safety engineers for handling the identified impact. We implement the proposed approach in a model-based safety engineering environment called FASTEN, and we demonstrate its application for an ML-based pedestrian detection component of an ADS.
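As a minimal, hypothetical sketch of the kind of change-impact analysis described above (the artifact names, dependency structure, and traversal logic are illustrative assumptions, not the paper's FASTEN implementation), the following Python fragment maps a change in one dimension of the ML component's input space to the downstream artifacts, such as datasets and performance requirements, that would need re-validation:

```python
# Toy change-impact analysis (illustrative only): map a change in the input
# space of an ML component to the safety-case artifacts that depend on it.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str                                      # e.g. "training dataset"
    depends_on: set = field(default_factory=set)   # input-space dimensions it covers

def impacted_artifacts(changed_dimensions, artifacts):
    """Return (artifact name, overlapping dimensions) pairs affected by the change."""
    report = []
    for art in artifacts:
        overlap = art.depends_on & changed_dimensions
        if overlap:
            report.append((art.name, sorted(overlap)))
    return report

if __name__ == "__main__":
    artifacts = [
        Artifact("pedestrian training dataset", {"lighting", "occlusion", "distance"}),
        Artifact("performance requirement: recall >= 0.99", {"distance"}),
        Artifact("robustness test suite", {"lighting", "weather"}),
    ]
    # Example change: the input space is extended with night-time driving,
    # i.e. the "lighting" dimension changes.
    for name, dims in impacted_artifacts({"lighting"}, artifacts):
        print(f"Re-validate '{name}' (affected dimensions: {dims})")
```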
