Abstract

Artificial intelligence (AI) is rapidly spreading to many fields, including safety-critical ones; reliable AI therefore means ensuring the safety of autonomous decisions. Because false negatives may have a safety impact (e.g., in a mobility scenario, a prediction of no collision when a collision actually occurs), the aim is to push them as close to zero as possible by designing "safety regions" in the feature space with statistically zero error. We show here how sensitivity analysis of an explainable AI model drives such statistical assurance. We test and compare the proposed algorithms on two different datasets (physical fatigue and vehicle platooning) and reach quite different conclusions about achievable performance, which depend strongly on the level of noise in the dataset rather than on the algorithms at hand.
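The core idea of a zero-false-negative safety region can be illustrated with a minimal sketch: calibrate a conservative decision threshold so that no "unsafe" calibration sample is ever declared safe. This is only an illustrative toy, not the paper's algorithm; the synthetic data, the linear score standing in for a trained model, and the threshold rule are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary data: label 1 = "unsafe" (e.g., collision), label 0 = "safe".
n = 2000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Toy score: higher means "more likely unsafe" (stand-in for a trained model).
score = X[:, 0] + X[:, 1]

# Split into calibration and test halves.
s_cal, y_cal = score[:n // 2], y[:n // 2]
s_test, y_test = score[n // 2:], y[n // 2:]

# Conservative threshold: the safety region {score < tau} must contain no
# unsafe calibration point, so tau is the smallest score among unsafe samples.
tau = s_cal[y_cal == 1].min()

# Evaluate on held-out data: how much of the space the safety region covers,
# and whether any unsafe test point slips inside it (a false negative).
safe_test = s_test < tau
coverage = safe_test.mean()
fn = int(np.sum(y_test[safe_test] == 1))
print(f"threshold={tau:.3f}, coverage={coverage:.2%}, false negatives={fn}")
```

Note that zero false negatives on the calibration set only yields a *statistical* guarantee on new data, which is precisely why the abstract speaks of statistically zero error rather than a deterministic one, and why noisy datasets shrink the achievable safety region.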
