Abstract

The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency in maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains limited. To fill this gap, we propose a method of adaptive trust calibration that consists of a framework for detecting inappropriate calibration status by monitoring the user’s reliance behavior, and cognitive cues called “trust calibration cues” (TCCs) that prompt the user to reinitiate trust calibration. We evaluated our framework and four types of TCCs in an online experiment using a drone simulator. A total of 116 participants performed pothole inspection tasks using the drone’s automatic inspection, whose reliability could fluctuate depending on weather conditions. Participants had to decide whether to rely on the automatic inspection or to inspect manually. The results showed that adaptively presenting simple cues significantly promoted trust calibration during over-trust.

Highlights

  • There is growing interest in automation and autonomous AI technologies across many fields of application

  • We focus on the problem of over-trust and under-trust and propose a novel method of adaptive trust calibration, consisting of a framework for detecting inappropriate calibration status and cognitive cues called “trust calibration cues” (TCCs) that prompt the user to reinitiate trust calibration

  • With this framework and TCCs, we propose a method of adaptive trust calibration in which a TCC is presented adaptively whenever improper calibration is detected (a minimal illustrative sketch follows this list)
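
As an illustration only, the sketch below (in Python) shows one way such an adaptive loop could be organized: recent reliance decisions are monitored, compared against the current reliability of the automation, and a TCC is emitted only when the pattern suggests over-trust or under-trust. The class name, window size, and threshold are hypothetical choices made for this sketch, not details of the paper’s detection algorithm.

    from collections import deque
    from typing import Optional

    # Illustrative sketch only: the window size, threshold, and names below
    # are assumptions for demonstration, not values from the paper.

    WINDOW = 5                 # number of recent decisions to consider
    RELIANCE_THRESHOLD = 0.6   # fraction of automated choices treated as "relying"

    class AdaptiveTrustCalibrator:
        def __init__(self) -> None:
            # True = the user relied on automatic inspection for that task
            self.recent_choices = deque(maxlen=WINDOW)

        def record(self, relied_on_automation: bool) -> None:
            self.recent_choices.append(relied_on_automation)

        def status(self, system_reliable: bool) -> str:
            # Compare observed reliance behavior with current system reliability.
            if len(self.recent_choices) < WINDOW:
                return "calibrating"
            reliance_rate = sum(self.recent_choices) / len(self.recent_choices)
            if reliance_rate >= RELIANCE_THRESHOLD and not system_reliable:
                return "over-trust"    # relying on automation it cannot support
            if reliance_rate <= 1 - RELIANCE_THRESHOLD and system_reliable:
                return "under-trust"   # doing manual work the automation could handle
            return "calibrated"

        def maybe_present_tcc(self, system_reliable: bool) -> Optional[str]:
            # Present a trust calibration cue only when calibration looks improper.
            s = self.status(system_reliable)
            if s in ("over-trust", "under-trust"):
                return f"TCC: possible {s} detected; please re-evaluate the automation."
            return None

    # Example: a user keeps choosing automatic inspection while reliability drops
    # (e.g. worsening weather), so an over-trust cue is triggered.
    calibrator = AdaptiveTrustCalibrator()
    for _ in range(5):
        calibrator.record(relied_on_automation=True)
    print(calibrator.maybe_present_tcc(system_reliable=False))

In the experimental setting described in the abstract, the reliability signal in such a loop would correspond to the weather-dependent performance of the drone’s automatic inspection.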

Introduction

There is growing interest in automation and autonomous AI technologies in many fields of application. Expanding application areas such as robotics, autonomous web-based systems, and decision aids are changing many aspects of our daily lives. Because such technologies are never perfect, collaboration between human users and autonomous AI agents is essential. One key aspect of such collaboration is the users’ trust in the agents. Successful collaboration requires users to appropriately adjust their level of trust to the actual reliability of the agents. This process is called trust calibration [1, 2].
