Abstract

Approaches based on Machine Learning (ML) provide novel and promising solutions for implementing safety-critical functions in the field of autonomous driving. Establishing assurance in these ML components through safety requirements is critical, as their failure may lead to hazardous events such as a pedestrian being hit by the ego vehicle due to an erroneous output of an ML component (e.g., a pedestrian not being detected in a safety-critical region). In this paper, we present our experience with applying the System-Theoretic Process Analysis (STPA) approach to an ML-based perception component within a pedestrian collision avoidance system. STPA is integrated into the safety life cycle of functional safety (regulated by ISO 26262), complemented with safety of the intended functionality (regulated by ISO/FDIS 21448), in order to elicit safety requirements. These requirements are derived from STPA unsafe control actions and loss scenarios, thus enabling traceability from hazards to ML safety requirements. For specifying loss scenarios, we propose to refer to erroneous outputs of the ML component due to ML functional insufficiencies, while adhering to the guidelines of the STPA handbook.

Keywords: Safety requirements, Machine Learning, Functional insufficiencies, STPA, ISO 26262, ISO/FDIS 21448
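To illustrate the traceability chain described in the abstract, the following is a minimal Python sketch that links a hazard to its unsafe control actions and loss scenarios and derives one ML safety requirement per loss scenario. It is an illustrative assumption on our part, not an artifact of the paper or of any STPA tooling; the class names (Hazard, UnsafeControlAction, LossScenario, SafetyRequirement), the identifiers, and the derive_requirements helper are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LossScenario:
    """Hypothetical record of a causal scenario, e.g. an erroneous ML output due to a functional insufficiency."""
    identifier: str
    description: str


@dataclass
class UnsafeControlAction:
    """Hypothetical STPA unsafe control action linked to the loss scenarios that can cause it."""
    identifier: str
    description: str
    loss_scenarios: List[LossScenario] = field(default_factory=list)


@dataclass
class Hazard:
    """Hypothetical system-level hazard traced to the unsafe control actions that can lead to it."""
    identifier: str
    description: str
    unsafe_control_actions: List[UnsafeControlAction] = field(default_factory=list)


@dataclass
class SafetyRequirement:
    """Hypothetical ML safety requirement that keeps a trace to the loss scenario it addresses."""
    identifier: str
    text: str
    derived_from: str  # identifier of the loss scenario this requirement addresses


def derive_requirements(hazard: Hazard) -> List[SafetyRequirement]:
    """Derive one requirement per loss scenario so each requirement is traceable back to the hazard."""
    requirements: List[SafetyRequirement] = []
    for uca in hazard.unsafe_control_actions:
        for scenario in uca.loss_scenarios:
            requirements.append(
                SafetyRequirement(
                    identifier=f"REQ-{scenario.identifier}",
                    text=f"The ML component shall mitigate the scenario: {scenario.description}.",
                    derived_from=scenario.identifier,
                )
            )
    return requirements


if __name__ == "__main__":
    # Toy example mirroring the abstract: a missed pedestrian detection leading to a collision hazard.
    scenario = LossScenario("LS-1", "a pedestrian in the safety-critical region is not detected")
    uca = UnsafeControlAction("UCA-1", "braking is not commanded while a pedestrian is in the vehicle path", [scenario])
    hazard = Hazard("H-1", "ego vehicle collides with a pedestrian", [uca])
    for req in derive_requirements(hazard):
        print(f"{req.identifier} (traces to {req.derived_from}): {req.text}")
```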
