Abstract

In this article, we describe an approach to autonomous system construction that supports not only self-awareness but also formal verification. This is based on modular construction in which the key autonomous decision-making is captured within a symbolically described “agent.” The article thus leads us from traditional systems architectures, via agent-based computing, to explainability, reconfigurability, and verifiability, and on to applications in robotics, autonomous vehicles, and machine ethics. Fundamentally, we consider self-awareness from an agent-based perspective. Agents are an important abstraction capturing autonomy, and we are particularly concerned with intentional, or rational, agents that expose the “intentions” of the autonomous system. Beyond being a useful abstract concept, agents also provide a practical engineering approach for building the core software in autonomous systems such as robots and vehicles. In a modular autonomous system architecture, agents of this form capture important decision-making elements. Furthermore, the ability to transparently capture such decision-making processes within an agent, and especially to expose their intentions, allows us to apply strong (formal) agent verification techniques to these systems.

Highlights

  • Autonomous systems, ranging from robots and unmanned vehicles, through “smart” technologies, to autonomous software, are increasingly popular

  • An interesting aspect concerns the mechanism by which a system changes between levels of autonomy: when can the operator/pilot/driver give the system more control, and when can the system relinquish some or all control back to the human? Work on such variable, shared, or adjustable autonomy remains of strong relevance to practical systems [14]–[16]

  • The above elements provide a strong set of requirements for self-awareness and introspection, and a framework for assessing how we can design autonomous systems that implement and expose any, most, or all of the above, and how strongly we can verify these aspects in our systems

Summary

INTRODUCTION

There are many more examples across industrial, financial, healthcare, and domestic sectors. Rooted in philosophical views of autonomy [10], the development of autonomous computational systems has been taken up, expanding in the 1980s and 1990s, through control systems [11] and agent-based systems [12]. This has led to a plethora of variations on autonomy, and we can refine the above general definition into further subcategories describing where, and how, decisions are made. An interesting aspect of this concerns the mechanism by which a system changes between these levels: when can the operator/pilot/driver give the system more control, and when can the system relinquish some or all control back to the human? Work on such variable, shared, or adjustable autonomy remains of strong relevance to practical systems [14]–[16].
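The handover mechanism between autonomy levels described above can be sketched as a small state machine. The level names, the confidence-threshold guard, and the method names below are illustrative assumptions for this sketch, not an implementation from the article:

```python
from enum import IntEnum

class Level(IntEnum):
    """Illustrative autonomy levels (an assumption of this sketch)."""
    MANUAL = 0      # human fully in control
    SHARED = 1      # control shared between human and system
    AUTONOMOUS = 2  # system fully in control

class AdjustableAutonomy:
    """Minimal sketch of variable/shared/adjustable autonomy handover."""

    def __init__(self, confidence_threshold: float = 0.8):
        self.level = Level.MANUAL
        self.confidence_threshold = confidence_threshold

    def operator_grants(self, system_confidence: float) -> Level:
        """Operator offers the system more control; the transfer is
        accepted only if the system's self-assessed confidence clears
        the threshold (a stand-in for richer self-awareness checks)."""
        if (self.level < Level.AUTONOMOUS
                and system_confidence >= self.confidence_threshold):
            self.level = Level(self.level + 1)
        return self.level

    def system_relinquishes(self) -> Level:
        """System hands some control back to the human, e.g. when it
        detects it is outside its acceptable operating boundaries."""
        if self.level > Level.MANUAL:
            self.level = Level(self.level - 1)
        return self.level
```

The one-step transitions keep every handover explicit and auditable, which is what makes such a mechanism amenable to the formal verification the article advocates.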

Verification
Self-Awareness
ARCHITECTURES
What Is It “Thinking”?
Why Choose That?
What Can It Do?
How Well Is It Working?
FORMAL VERIFICATION OF RATIONAL AGENTS
Is It Legal?
Are we acting to legal standards?
Is It Ethical?
Awareness of Acceptable Boundaries
EXPLAINABILITY
Can It Explain Itself?
Winfield and Jirotka’s “Ethical Black Box”
