Abstract

Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there is currently no consensual definition of trust. This paper reviews previous studies of trust in automation, with emphasis on its meaning and on the factors that determine subjective assessments of trust and of automation trustworthiness (which is sometimes, but not always, regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. It goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and can adapt accordingly. This situation can be modeled as a formal game in which the user and the automation (computer) each engage the other according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation and determine criteria for adjusting automation design to best accommodate these user differences.
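
As an illustration of the game-theoretic framing in the abstract, the following Python sketch encodes a hypothetical 2x2 payoff matrix between user and automation and finds the pure-strategy equilibria. The strategy names and utility values are invented for illustration; the paper itself supplies no such numbers.

    # Hypothetical 2x2 trust game between a human user and the automation.
    # Strategies and payoff values are illustrative assumptions, not from the paper.
    # Each cell holds (user_utility, automation_utility) as net benefit minus cost.

    USER_STRATEGIES = ["rely", "verify"]
    AUTO_STRATEGIES = ["act_autonomously", "ask_confirmation"]

    PAYOFF = {
        ("rely",   "act_autonomously"): (3, 3),  # smooth cooperation, low workload
        ("rely",   "ask_confirmation"): (1, 2),  # needless interruptions
        ("verify", "act_autonomously"): (2, 1),  # duplicated effort
        ("verify", "ask_confirmation"): (2, 2),  # cautious but slow
    }

    def best_response_user(auto_strategy):
        """User strategy maximizing the user's payoff against a fixed automation strategy."""
        return max(USER_STRATEGIES, key=lambda u: PAYOFF[(u, auto_strategy)][0])

    def best_response_auto(user_strategy):
        """Automation strategy maximizing its payoff against a fixed user strategy."""
        return max(AUTO_STRATEGIES, key=lambda a: PAYOFF[(user_strategy, a)][1])

    # A cell is a pure-strategy Nash equilibrium when both sides are best-responding.
    equilibria = [
        (u, a)
        for u in USER_STRATEGIES
        for a in AUTO_STRATEGIES
        if best_response_user(a) == u and best_response_auto(u) == a
    ]
    print(equilibria)  # [('rely', 'act_autonomously'), ('verify', 'ask_confirmation')]

Under these assumed utilities the game has two equilibria, a high-trust mode (rely / act autonomously) and a low-trust mode (verify / ask confirmation), which mirrors the reciprocal-adaptation idea in the abstract.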

Highlights

  • In recent years, trust in automation has become an active field of research in human factors psychology and human-systems engineering

  • This paper asserts that as automation becomes more “intelligent,” users’ trust in automation will increasingly resemble trust in another person. This is likely to result in greater individual differences among human trusters, as well as differences among the computer-based automation systems that are the objects of that trust

  • The automation trustworthiness attributes described above imply that the automation can record its interactions with the human user and model its own trust in that user (see the sketch following this list)

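As a minimal sketch of that last highlight, the Python snippet below maintains a running trust estimate that the automation holds about its user and adapts its behavior accordingly. The exponential-moving-average update rule, the threshold, and all names here are assumptions for illustration, not the paper's model.

    # Illustrative sketch: automation keeps a trust estimate of its user.
    # Update rule, names, and threshold are assumptions, not the paper's model.

    class UserTrustModel:
        def __init__(self, alpha=0.2, initial_trust=0.5):
            self.alpha = alpha          # learning rate: weight given to new evidence
            self.trust = initial_trust  # estimate in [0, 1] of user reliability

        def record_interaction(self, user_acted_appropriately: bool) -> float:
            """Update trust from one observed interaction; return the new estimate."""
            evidence = 1.0 if user_acted_appropriately else 0.0
            self.trust += self.alpha * (evidence - self.trust)
            return self.trust

        def should_defer_to_user(self, threshold=0.7) -> bool:
            """Adapt behavior: defer to the user only when estimated trust is high."""
            return self.trust >= threshold

    model = UserTrustModel()
    for outcome in [True, True, False, True]:
        model.record_interaction(outcome)
    print(round(model.trust, 3), model.should_defer_to_user())  # 0.635 False
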

Summary

INTRODUCTION

In recent years, trust in automation has become an active field of research in human factors psychology and human-systems engineering. Lyons et al. (2011) conclude from a factor-analysis experiment that trust and distrust may be orthogonal properties, independent of the judged validity of trust in automation, which they call “IT suspicion.” Hoff and Bashir (2015) review 101 papers comprising 127 studies on trust in automation, with the aim of sorting out factors that they categorize with respect to the truster’s disposition, the situation, and learning. They provide a useful taxonomy of design recommendations based on various authors’ findings, including the following design features: appearance, ease of use, communication, transparency, and level of control. Trust has been defined in many different ways in the literature, and this paper will try to explicate these ways further, both with regard to the trust vs. trustworthiness distinction and especially with regard to the meaning of trust as computers become more “intelligent,” as defined above.

