Abstract
A computational system is called autonomous if it can make its own decisions, or take its own actions, without human supervision or control. The capability and spread of such systems have reached the point where they are beginning to touch much of everyday life. However, regulators grapple with how to deal with autonomous systems; for example, how could we certify an Unmanned Aerial System for autonomous use in civilian airspace? We analyse what is needed to provide verified, reliable behaviour of an autonomous system, survey what the state of the art in automated verification can achieve, and propose a roadmap towards developing regulatory guidelines, articulating challenges to researchers, to engineers, and to regulators. Case studies in seven distinct domains illustrate the article.
Highlights
Since the dawn of human history, humans have designed, implemented and adopted tools to make it easier to perform tasks, often improving efficiency, safety, or security.
We note the breadth of work tackling formal methods for robotic systems [178] and would expect this to influence regulation and certification in the future, aligning them with developments in the autonomous systems area.
How can we extend both formal specification and formal verification to reason effectively about evolving knowledge? Extending specification and verification techniques such as model checking to logics like Temporal Epistemic Logic, which reasons about the evolution of knowledge over time, is a challenging research area [54].
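To make the idea behind temporal-epistemic verification concrete, the following sketch (a hypothetical toy example, not taken from the article or any tool it cites) checks a property of the form AG(p → K_a p) over a tiny explicitly enumerated Kripke model: the system has two bits of state, agent "a" observes only the first bit, and the check asks whether, in every reachable state where the proposition p holds, agent a *knows* that it holds.

```python
# Toy explicit-state check of a temporal-epistemic property (hypothetical
# illustration, not the article's methodology or any cited verifier).

from itertools import product

# States of a two-bit system; agent "a" can observe only the first bit.
states = list(product([0, 1], repeat=2))

# Transition relation: for simplicity, any state can follow any state.
def successors(s):
    return states

# Epistemic indistinguishability for agent "a": states agreeing on bit 0.
def indistinguishable_a(s, t):
    return s[0] == t[0]

# Atomic proposition p: "the first bit is 1".
def p(s):
    return s[0] == 1

# K_a p holds in s iff p holds in every state a cannot distinguish from s.
def knows_a_p(s):
    return all(p(t) for t in states if indistinguishable_a(s, t))

# Temporal layer: AG(p -> K_a p), checked by exploring all reachable states.
def check_AG_p_implies_Kap(initial):
    seen, frontier = set(), [initial]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        if p(s) and not knows_a_p(s):
            return False
        frontier.extend(successors(s))
    return True

print(check_AG_p_implies_Kap((0, 0)))  # True: a's observation determines p
```

Because agent a directly observes the bit that defines p, the property holds; making p depend on the *second*, unobserved bit would make the check fail, which is exactly the distinction between a fact being true and an agent knowing it that epistemic logics capture.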
Summary
Since the dawn of human history, humans have designed, implemented and adopted tools to make it easier to perform tasks, often improving efficiency, safety, or security. Recent studies show a direct relationship between increasing technological complexity, cognitive evolution, and cultural variation [231]. When such tools were simple, the person using the tool had full control over the way the tool should be operated, understood why it worked in that way, knew how the tool should be used to comply with existing rules, and when such rules might be broken if the situation demanded an exceptional use of the tool. Even if we are domain experts, we barely know the complete event/data flow initiated by just pressing one button. This is even more true with the rise of auto-* and self-* systems (auto-pilots, self-driving cars, self-configuring industrial equipment, etc.). Due to the delegation of more and more capabilities from humans to machines, the scenario depicted in Fig. 4 – where the human is replaced by an