Abstract

Machine Learning (ML) algorithms are increasingly incorporated into many systems because of their ability to learn and solve complex problems. Some of these systems qualify as Safety-Critical Systems (SCS); the performance of the embedded ML algorithms must therefore be sufficiently safe with respect to the safety requirements of the host SCS. However, the performance analysis of ML algorithms usually relies on metrics that were not developed with safety in mind and that may consequently be inappropriate for assessing ML algorithms from a safety perspective. This paper argues for accounting for the distribution, not just the number, of False Negatives as an additional element when assessing ML algorithms intended for integration into SCS. We empirically assess the suitability of incorporating ML-based components (anomaly-based intrusion detectors) into SCS using both traditional metrics and the novel SSPr and NPr metrics, which consider the number as well as the distribution of False Negatives. The results of our experiment allow us to discuss the potential of ML-based components to be incorporated into SCS.
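To illustrate why the distribution of False Negatives matters beyond their count, consider the following minimal sketch. It is not the paper's SSPr or NPr definitions (which are not reproduced in the abstract); it is an illustrative, assumed comparison of two hypothetical detectors that miss the same number of anomalies, one scattering its misses and one missing an uninterrupted burst.

```python
def fn_stats(y_true, y_pred):
    """Return the total number of False Negatives and the longest run of
    consecutive False Negatives over a time-ordered label sequence.
    Labels: 1 = anomaly/attack, 0 = normal."""
    fn_total, longest_run, current_run = 0, 0, 0
    for truth, pred in zip(y_true, y_pred):
        if truth == 1 and pred == 0:      # missed anomaly -> False Negative
            fn_total += 1
            current_run += 1
            longest_run = max(longest_run, current_run)
        else:
            current_run = 0
    return fn_total, longest_run

# Hypothetical ground truth: a time-ordered stream with 6 anomalous observations.
y_true       = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]
# Detector A misses 3 anomalies scattered across the stream.
y_pred_scat  = [0, 1, 1, 0, 0, 0, 1, 0, 0, 0]
# Detector B also misses 3 anomalies, but as one uninterrupted burst.
y_pred_burst = [0, 0, 0, 0, 0, 1, 1, 1, 0, 0]

print(fn_stats(y_true, y_pred_scat))   # (3, 1): same FN count, short gaps
print(fn_stats(y_true, y_pred_burst))  # (3, 3): same FN count, long blind window
```

A traditional FN count (or recall) rates both detectors identically, whereas a distribution-aware view exposes the burst detector's prolonged blind window, which is the kind of behavior that matters when the detector operates inside an SCS.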
