Abstract

As intelligent machines have become widespread in various applications, it has become increasingly important to operate them efficiently. Monitoring human operators’ trust is required for productive interactions between humans and machines. However, neurocognitive understanding of human trust in machines is limited. In this study, we analysed human behaviours and electroencephalograms (EEGs) obtained during non-reciprocal human-machine interactions. Human subjects supervised their partner agents by monitoring and intervening in the agents’ actions in this non-reciprocal interaction, which reflects practical uses of autonomous or smart systems. Furthermore, we diversified the agents with external and internal human-like factors to understand the influence of anthropomorphism of machine agents. Agents’ internal human-likenesses were manifested in the way they conducted a task and affected subjects’ trust levels. From EEG analysis, we defined brain responses correlated with increases and decreases of trust. The effects of trust variations on brain responses were more pronounced with agents who were externally closer to humans and who elicited greater trust from the subjects. This research provides a theoretical basis for modelling human neural activities that indicate trust in partner machines and can thereby contribute to the design of machines that promote efficient interactions with humans.

Highlights

  • Human trust in machine partners has different characteristics from trust in human partners[1,5,6,7]

  • Studies using functional magnetic resonance imaging have demonstrated different brain activation in response to untrustworthy human faces compared with trustworthy faces[12,13] and investigated the neural correlates of building trust during interactions between humans[14]

  • The results demonstrated that the theta band (4–8 Hz) power at approximately 0.4 s decreased after agents’ correct decisions (ACs) and increased after agents’ wrong decisions (AWs)
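
The theta-band power comparison described in the last highlight can be sketched as a simple spectral computation. The sketch below is illustrative only: the sampling rate, window limits, and simulated theta burst are assumptions for demonstration, not values or code from the study.

```python
import numpy as np

def theta_band_power(epoch, fs, t_start=0.3, t_end=0.5, band=(4.0, 8.0)):
    """Mean spectral power in `band` (Hz) within [t_start, t_end] seconds.

    `epoch` is a 1-D EEG trace time-locked to the agent's decision, so
    times are relative to event onset. All defaults are illustrative
    assumptions, not parameters reported in the study.
    """
    i0, i1 = int(t_start * fs), int(t_end * fs)
    segment = epoch[i0:i1]
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / len(segment)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Simulated example: a 1-s epoch at 250 Hz containing a 6 Hz theta
# burst centred near 0.4 s, plus low-amplitude white noise.
fs = 250
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
burst = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2)) * np.sin(2 * np.pi * 6 * t)
epoch = burst + 0.1 * rng.standard_normal(t.size)

# Theta power around 0.4 s should exceed that of a late baseline window.
print(theta_band_power(epoch, fs, 0.3, 0.5))
print(theta_band_power(epoch, fs, 0.8, 1.0))
```

Averaging such band-power values across AC and AW trials, as described in the highlight, would then expose the direction of the post-event theta change.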

Introduction

Human trust in machine partners has different characteristics from trust in human partners[1,5,6,7]. Studies using functional magnetic resonance imaging (fMRI) have demonstrated different brain activation in response to untrustworthy human faces compared with trustworthy faces[12,13] and have investigated the neural correlates of building trust during interactions between humans[14]. To the best of our knowledge, no previous study has investigated the neural correlates of human trust in automated agents during non-reciprocal interactions. We designed and conducted an experiment on non-reciprocal interactions between humans and machine agents. We measured EEG responses and investigated human neural responses related to the development, maintenance, and degradation of situational or learned trust[22] in machine teammates, as well as the factors that influence that trust. We hypothesized that the human-likenesses of automated agents would have a significant impact on the behavioural and neural responses of human supervisors related to trust variations and formations.
