Abstract

Researchers have assumed a dichotomy between human-human trust (HHT) and human-automation trust (HAT). With the advent of cognitive agents — entities that are neither fully machine nor fully human — it is important to revisit this theory. Some claim that HHT and HAT are the same concept and propose that people respond socially to more human-like automation. Others argue that HHT and HAT are fundamentally different and propose models indicating differences in initial perception, automation-monitoring performance, and judgments that lead to differences in trust. In this study, we varied humanness along a cognitive spectrum and investigated trust and performance with these different types of cognitive agents. Results showed that increasing the humanness of the automation improved trust calibration and appropriate compliance with an automated aid, leading to better overall performance and trust, especially under unreliable conditions. Automated aids that exhibit human characteristics may be more resilient to disuse in the face of sub-optimal machine performance.

Full Text

Paper version not known

