Abstract

We present a computational model capable of predicting—above human accuracy—the degree of trust a person has toward their novel partner by observing the trust-related nonverbal cues expressed in their social interaction. We summarize our prior work, in which we identify nonverbal cues that signal untrustworthy behavior and also demonstrate the human mind's readiness to interpret those cues to assess the trustworthiness of a social robot. We demonstrate that domain knowledge gained from our prior human-subjects experiments, when incorporated into the feature engineering process, permits a computational model to outperform both human predictions and a baseline model built without this domain knowledge. We then present the construction of hidden Markov models to investigate temporal relationships among the trust-related nonverbal cues. By interpreting the resulting learned structure, we observe that models built to emulate different levels of trust exhibit different sequences of nonverbal cues. From this observation, we derive sequence-based temporal features that further improve the accuracy of our computational model. The multi-step research process presented in this paper combines the strengths of experimental manipulation and machine learning to not only design a computational trust model but also to further our understanding of the dynamics of interpersonal trust.
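To make the pipeline described above concrete, the following minimal sketch illustrates one way such a model could be assembled: cue-frequency features are combined with sequence-based temporal features (log-likelihoods of a cue sequence under per-trust-level hidden Markov models) and fed to a support vector regressor. This is not the authors' implementation; the cue codes, the randomly initialized HMM parameters (which would in practice be learned from annotated interactions, e.g., with Baum-Welch), and the synthetic data are all hypothetical, and scikit-learn and NumPy are assumed to be available.

```python
# Illustrative sketch only -- not the authors' code. Assumes nonverbal cues are
# annotated as categorical symbols per time step (cue codes here are hypothetical).
import numpy as np
from sklearn.svm import SVR

N_CUES = 5  # e.g., 0=lean back, 1=face touch, 2=arms crossed, 3=hand touch, 4=none

def forward_loglik(seq, start, trans, emit):
    """Log-likelihood of a cue sequence under a discrete HMM (scaled forward algorithm)."""
    alpha = start * emit[:, seq[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for sym in seq[1:]:
        alpha = (alpha @ trans) * emit[:, sym]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def cue_features(seq, hmms):
    """Engineered features: cue frequencies plus HMM log-likelihoods as temporal features."""
    freqs = np.bincount(seq, minlength=N_CUES) / len(seq)
    seq_feats = [forward_loglik(seq, *params) for params in hmms]
    return np.concatenate([freqs, seq_feats])

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def random_hmm(n_states=3):
        # Stand-in for an HMM trained on low- or high-trust interactions.
        start = rng.dirichlet(np.ones(n_states))
        trans = rng.dirichlet(np.ones(n_states), size=n_states)
        emit = rng.dirichlet(np.ones(N_CUES), size=n_states)
        return start, trans, emit

    hmms = [random_hmm(), random_hmm()]

    # Synthetic cue sequences and token counts (0-4), purely for demonstration.
    seqs = [rng.integers(0, N_CUES, size=rng.integers(20, 60)) for _ in range(40)]
    tokens = rng.integers(0, 5, size=len(seqs))

    X = np.vstack([cue_features(s, hmms) for s in seqs])
    model = SVR(kernel="rbf").fit(X, tokens)
    print("Mean prediction error:", np.mean(np.abs(model.predict(X) - tokens)))
```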

Highlights

  • Robots have an immense potential to help people in domains such as education, healthcare, manufacturing, and disaster response

  • The support vector machine (SVM)-S model is estimated to have a mean prediction error (MPE) of 1.00, and its hyper-parameters vary more than those of the SVM-D model

  • Compared with other baselines, the SVM-D model significantly outperforms a random model that uniformly guesses 0, 1, 2, 3, or 4 tokens (a toy comparison follows this list)
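As a rough illustration of the comparison named in the last highlight, the sketch below contrasts a model's mean prediction error (MPE) with that of a uniform-random guesser over 0-4 tokens. The data and the stand-in predictions are synthetic and purely for demonstration; they are not results from the paper.

```python
# Illustrative MPE comparison against a uniform-random baseline; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
true_tokens = rng.integers(0, 5, size=200)                              # ground-truth tokens entrusted (0-4)
model_preds = np.clip(true_tokens + rng.normal(0, 1, size=200), 0, 4)   # stand-in model predictions

model_mpe = np.mean(np.abs(model_preds - true_tokens))
random_mpe = np.mean(np.abs(rng.integers(0, 5, size=200) - true_tokens))

print(f"model MPE:  {model_mpe:.2f}")
print(f"random MPE: {random_mpe:.2f}")
```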

Introduction

Robots have an immense potential to help people in domains such as education, healthcare, manufacturing, and disaster response. Researchers have designed robots that take steps toward helping children learn a second language (Kanda et al., 2004), assisting nurses with triage (Wilkes et al., 2010), and participating as part of a search and rescue team (Jung et al., 2013). As such robots begin to collaborate with us, we should consider the interpersonal and social factors that can mediate the outcome of the human-robot team. One channel through which such factors are expressed is nonverbal behavior, which includes body language, social touch, facial expressions, eye-gaze patterns, proxemics (i.e., interpersonal distancing), and vocal acoustics such as prosody and tone. Through these nonverbal expressions, we communicate mental states such as thoughts and feelings (Ambady and Weisbuch, 2010). This article describes the first work toward computationally predicting the trusting behavior of an individual toward a social partner.
