Abstract
As complex autonomous systems become increasingly ubiquitous, their deployment and integration into daily life become a significant endeavor. The human–machine trust relationship is now acknowledged as one of the primary factors that characterize a successful integration. In the context of human–machine interaction (HMI), proper use of machines and autonomous systems depends on both the human and the machine counterparts. On the one hand, it depends on how appropriately the human relies on the machine for the situation or task at hand, based on willingness and experience. On the other hand, it depends on how well the machine carries out the task and how well it conveys important information about how the job is done. Furthermore, proper calibration of trust for effective HMI requires the factors affecting trust to be properly accounted for and their relative importance to be rightly quantified. In this article, the functional understanding of human–machine trust is viewed from two perspectives: human-centric and machine-centric. The human-centric discussion outlines the factors, scales, and approaches available to measure and calibrate human trust. The machine-centric discussion spans trustworthy artificial intelligence, built-in machine assurances, and ethical frameworks for trustworthy machines.
Summary
As autonomous systems become increasingly complex, the interaction between these systems and human users/operators relies heavily on how much and how well the users/operators trust them. It is difficult to provide a governing definition of autonomy without the situational context of the application; Fisher et al. [20] defined autonomous systems as those that decide for themselves what to do and when to do it. Expanding on this idea, Bradshaw et al. [21] emphasized that autonomy entails at least two dimensions: 1) self-directedness, which describes self-generation of goals by the agent; and 2) self-sufficiency, which describes independence of the agent from its physical environment and social context.