Abstract

Safety and security co-engineering is one of the latest challenges in autonomous vehicle (AV) development, and efficiently integrating safety and security requirements during co-engineering remains an open issue. Most functional safety and security analysis methods do not directly derive safety requirements, and improper handling of the relationship between the two can affect system design and timelines. This article uses large language models (LLMs) to assist in the collaborative analysis of functional safety and security. The main contributions are as follows. First, we propose three types of formulations to summarize hazard scenarios and threat scenarios, and use LLMs to extract functional safety requirements and security requirements from them. Second, we use three LLMs to perform relationship checks on the extracted functional safety and security requirements. The results show that the majority of the checks are correct and consistent, with only a small portion requiring manual intervention, significantly reducing human labor. Through these methods, we demonstrate the potential and efficiency of LLMs in the collaborative analysis of functional safety and security.
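The two-step workflow described above (extract requirements from scenario text, then check pairwise relationships) could be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: `call_llm` is a placeholder stub standing in for any chat-completion API, and its canned replies exist only so the sketch runs offline; the prompt wording and the relationship labels ("independent", "supporting", "conflicting") are assumptions for illustration.

```python
# Hedged sketch of an LLM-assisted safety/security co-analysis pipeline.
# Step 1: extract requirements from a hazard or threat scenario.
# Step 2: classify the relationship between a safety and a security requirement.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real chat-completion client here.

    Canned replies let the sketch run without network access.
    """
    if "Extract" in prompt:
        return "REQ: The system shall transition to a safe state on sensor loss."
    return "conflicting"

def extract_requirements(scenario: str, kind: str) -> list[str]:
    """Step 1: derive functional safety or security requirements from a scenario."""
    prompt = (
        f"Extract {kind} requirements from this scenario, one per line, "
        f"each starting with 'REQ:':\n{scenario}"
    )
    reply = call_llm(prompt)
    # Keep only well-formed requirement lines, stripping the 'REQ:' marker.
    return [line[len("REQ:"):].strip()
            for line in reply.splitlines()
            if line.startswith("REQ:")]

def check_relationship(safety_req: str, security_req: str) -> str:
    """Step 2: ask the LLM how the requirement pair relates; validate the label."""
    prompt = (
        "Classify the relationship between these requirements as exactly one of "
        "'independent', 'supporting', or 'conflicting'.\n"
        f"Safety requirement: {safety_req}\n"
        f"Security requirement: {security_req}"
    )
    label = call_llm(prompt).strip().lower()
    # Anything outside the expected label set is flagged for manual review,
    # mirroring the small portion of checks that needed human intervention.
    return label if label in {"independent", "supporting", "conflicting"} else "unclear"
```

In practice, step 2 would be run with several different LLMs and the labels compared, escalating only disagreements to a human reviewer.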
