Abstract

We investigated whether an autonomous system can be provided with reasoning that maintains trust between human and system even when the two reach discrepant conclusions. Tversky and Kahneman’s research [27], and the vast literature following it, distinguishes two modes of human decision making: System 1, which is fast, emotional, and automatic, and System 2, which is slower, more deliberative, and more rational. Autonomous systems are thus far endowed only with System 2. So when interacting with such a system, humans may follow System 1, unaware that their autonomous partner follows System 2. This can easily confuse the user when a discrepant decision is reached, eroding their trust in the autonomous system. Hence we investigated whether trust in a message could interfere with trust in its source, namely the autonomous system. For this we presented participants with images that might or might not be genuine, and found that they often distrusted the image (e.g., as photoshopped) when they distrusted its content. We present a quantum cognitive model that explains this interference. We speculate that enriching an autonomous system with this model will allow it to predict when its decisions may confuse the user, take proactive steps to prevent this, and thereby reinforce and maintain trust in the system.

Highlights

  • Is it not time to work on the next step, where the machine proactively explains its actions from the human’s point of view, or foresees a human error because it knows how a human would reason in a particular case? This stands to greatly help humans put trust in autonomous systems, and in the current presentation we show one direction this work could take

  • The experiment is modeled using quantum cognition, laying a foundation for its implementation in future autonomous systems. Incorporating such models can proactively help the user avoid mistakes that are inherent in human judgement and prevent an erosion of trust

  • In the foreseeable future, humans and autonomous systems will engage in shared decision making


Summary

12.1 Introduction

We think that Wittgenstein’s own, less quoted, comment can lead the way [37]: “If language is to be a means of communication there must be agreement in definitions and (queer as this may sound) in judgements. This seems to abolish logic, but does not do so. It is one thing to describe methods of measurement, and another to state results of measurement.” This is an important vantage point for the current presentation: first, it emphasizes the role of judgement; second, it distinguishes the method of measurement from its result; and third, it challenges the role of logic. We will express these notions in the language of quantum cognition, which derives its terminology and computations from quantum mechanics. The experiment is modeled using quantum cognition, laying a foundation for its implementation in future autonomous systems. Incorporating such models can proactively help the user avoid mistakes that are inherent in human judgement and prevent an erosion of trust. We contend that in this way the interactions between humans and future autonomous systems will become more effective.
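The interference effect described above can be sketched numerically. In quantum cognition, two judgements are incompatible when their measurement bases do not coincide, so judging one question first changes the probabilities for the second. The following is a minimal illustrative sketch, not the chapter’s actual model: the belief state, the two bases, and the rotation angle are invented for exposition.

```python
import math

def inner(u, v):
    """Inner product of two real 2-D vectors."""
    return u[0] * v[0] + u[1] * v[1]

# Hypothetical initial belief state (unit vector in a 2-D belief space).
psi = (math.cos(0.3), math.sin(0.3))

# Basis for judging the content: trust vs. distrust (standard basis).
content_trust = (1.0, 0.0)
content_distrust = (0.0, 1.0)

# Basis for judging the source (the image), rotated by theta relative to the
# content basis -- the two judgements are therefore incompatible.
theta = math.pi / 5
source_trust = (math.cos(theta), math.sin(theta))

# Probability of trusting the source when it is judged directly.
p_direct = inner(source_trust, psi) ** 2

# Probability of trusting the source when the content is judged first:
# the first measurement collapses the state onto a content-basis vector,
# so the amplitudes no longer add coherently.
p_sequential = sum(
    inner(source_trust, c) ** 2 * inner(c, psi) ** 2
    for c in (content_trust, content_distrust)
)

# Nonzero difference = the interference term; it vanishes only when the
# two bases coincide (compatible judgements).
interference = p_direct - p_sequential
```

With compatible judgements (theta = 0) the sequential and direct probabilities agree and the interference term is zero; with incompatible bases, judging the content first measurably shifts the probability of trusting the source, which is the pattern the chapter’s model uses to explain the experimental data.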

12.2 Compatible and Incompatible States
12.3 A Quantum Cognition Model for the Emergence of Trust
12.4 Conclusion