Abstract

Avoiding human overtrust in machines is a vital issue in establishing a socially acceptable advanced driver assistance system (ADAS). However, research has not clarified how to design an ADAS that prevents driver overtrust in the system. A theoretical framework is needed for understanding how human trust becomes excessive. This paper proposes a trust model by which overtrust can be clearly defined, and shows that at least three types of overtrust can be distinguished on the basis of the model. As an example, this paper discusses human overtrust in an adaptive cruise control (ACC) system. In an experiment on a medium-fidelity driving simulator, we observed two of the three types of overtrust. First, some drivers relied on the ACC system beyond the limit of its deceleration capability. Second, a driver relied on the ACC system in the expectation that it could decelerate for a stopped vehicle. Through data analysis, we estimate how these kinds of overtrust emerged. Furthermore, possible ways to prevent driver overtrust in an ADAS are discussed.
