Abstract

Advances in artificial intelligence and robotics are providing the technical capabilities that will allow autonomous systems to perform complex tasks in uncertain situations. Despite these technical advances, a lack of human trust leads to inefficient system deployment, increases supervision workload, and fails to remove humans from harm's way. Conversely, excessive trust in autonomous systems may lead to increased risks and potentially catastrophic mission failure. In response to this challenge, trusted autonomy is the emerging scientific field aiming to establish the foundations and framework for developing trusted autonomous systems.

This paper investigates the use of modelling and simulation (M&S) to advance research into trusted autonomy. The work focuses on a comprehensive M&S-based synthetic environment that monitors operator inputs and provides outputs in a series of interactive, end-user-driven events designed to better understand trust and autonomous systems. As part of this analysis, a suite of prototype model-based planning, simulation, and analysis tools has been designed, developed, and tested in the first of a series of distributed interactive events. In each of these events, the applied M&S methodologies were assessed for their ability to answer the question: what are the key mechanisms that affect trust in autonomous systems? The potential shown by M&S throughout this work paves the way for a wide range of future applications that can be used to better understand trust in autonomous systems and remove a key barrier to their widespread adoption in the future of defense.

Keywords: Modelling and simulation; Trusted autonomy; Model-based systems engineering; Future of defense
