Abstract

Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving but hesitant to relinquish control of the vehicle, and lack of trust appears to be the core reason for this hesitance. To address this, an intelligent-agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearances, the current approach uses anthropomorphism to shape the interaction, applying Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and was perceived as more intelligent, than the graphical user interface. Additionally, an interface portrayed as confident in making decisions scored higher on all four constructs than one portrayed as having low confidence. Together, these results indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people’s trust in, and consequently their acceptance of, such vehicles.

Highlights

  • Autonomous vehicles are vehicles that can drive without human control

  • This paper presents a study investigating whether a conversational interface that is portrayed as being aware of its limitations, and transparent about them, is trusted and anthropomorphized more than a more generic graphical user interface

  • Similar effects were found for the interface with high confidence compared to the one with low confidence. These results indicate that a conversational interface that explains why it behaves in a certain way is trusted more, is considered to be more intelligent, is seen as more human-like, and is liked more



Introduction

Autonomous vehicles are vehicles that can drive without human control. This lack of control makes people uncertain whether the vehicles are reliable, and what they will do and why [1]. One solution could be to make the vehicle’s operation more transparent by transforming it into an intelligent agent that explains what it is doing. This explanation can be provided through a conversational interface that allows the driver to obtain information about what the car is doing in a natural way [2]. The objective of this paper is to investigate the effects of providing explanations about behavioral decisions through a conversational interface on people’s trust in, and perceived agency of, a self-driving car.
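
To make the idea of a maxim-shaped conversational explanation concrete, the sketch below is a minimal, hypothetical illustration in Python; it is not the authors' implementation, and all names (e.g., Decision, explain) are invented for illustration. It shows how a driving decision could be verbalized while respecting the Gricean maxims, and how the wording might be hedged under low confidence, mirroring the high- versus low-confidence portrayal examined in the study.

```python
# Hypothetical sketch: verbalizing a driving decision under the Gricean maxims.
# Quantity: say enough, not more. Quality: only assert what the sensors support.
# Relation: stay relevant to the current maneuver. Manner: be brief and clear.

from dataclasses import dataclass


@dataclass
class Decision:
    maneuver: str        # e.g., "braking"
    reason: str          # e.g., "a pedestrian is crossing ahead"
    confidence: float    # sensor/model confidence in [0, 1]


def explain(decision: Decision) -> str:
    """Compose a one-sentence, maxim-conformant explanation of a decision."""
    # Maxim of quality: hedge the statement when confidence is low, rather
    # than asserting something the system is unsure about.
    prefix = "I am" if decision.confidence >= 0.8 else "I think I should be"
    # Maxims of quantity, relation, and manner: a single sentence containing
    # only the maneuver and its reason, with no extraneous detail.
    return f"{prefix} {decision.maneuver} because {decision.reason}."


if __name__ == "__main__":
    print(explain(Decision("braking", "a pedestrian is crossing ahead", 0.95)))
    # -> "I am braking because a pedestrian is crossing ahead."
    print(explain(Decision("slowing down", "the view of the intersection is partly blocked", 0.55)))
    # -> "I think I should be slowing down because ..."
```

In this sketch, the confidence threshold directly determines whether the agent speaks assertively or hedges, which is one simple way an interface could be made transparent about its own limitations.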


