Abstract

This study developed an experimental paradigm (a CAPTCHA recognition task) with high ecological validity to investigate how continuous errors in an automated system, and the timing of their occurrence, affect human-automation trust. Continuous system errors were manipulated to appear in one of four timing conditions: the early, middle, or late stage of the task, or not at all. We found that continuous errors undermine trust in automated systems. More importantly, even with the same average system reliability, overall trust decreases significantly when errors occur continuously. Human-automation trust was significantly lower in the late continuous error condition than in the no continuous error condition, indicating that trust in automated systems follows the peak-end rule: user trust is mainly affected by the peak and end values of system reliability. This study offers new suggestions for trustworthy artificial intelligence design. Although system errors cannot be eliminated entirely, developers can minimize their impact on human-automation trust by avoiding continuous errors and preventing them from occurring during the late stage of interaction.
