Abstract

Safety-critical autonomous systems such as self-driving cars and fully autonomous industrial robots will integrate a number of complex technologies, each contributing to the residual failure potential of the overall system. For the validation of such systems it would be ideal to use a combination of design principles amenable to validation by formal methods and quantification of residual error probabilities. This goal, however, may not always be fully achievable. We conclude that a risk assessment must be aware of those contributing factors that cannot be quantified in an analytic way, including possibilities for novel forms of malicious manipulation and unforeseen effects of interaction with the environment.
