Abstract
Recent research suggests that reinforcement learning may underlie trait formation in social interactions with faces. The current study investigated whether the same learning mechanisms could be engaged for trait learning from voices. On each trial of a training phase, participants (N = 192) chose from pairs of human or slot machine targets that varied in (1) the reward value and (2) the generosity of their payouts. Targets were either auditory (voices or tones; Experiment 1) or visual (faces or icons; Experiment 2) and were presented sequentially before payout feedback. A test phase measured participants' choice behaviour, and a post-test recorded their target preference ratings. For auditory targets, we found a significant effect of reward, but not generosity, on target choices; preference ratings, however, were higher for more generous humans and slot machines. For visual targets, findings from previous studies were replicated: participants learned about both generosity and reward, but generosity was prioritised in the human condition. These findings provide one of the first demonstrations of reinforcement learning of reward with auditory stimuli in a social learning task, but suggest that the use of auditory targets does alter learning in this paradigm. Conversely, reinforcement learning of reward and trait information with visual stimuli remains intact even when sequential presentation introduces a delay in feedback.
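For readers unfamiliar with the learning mechanism invoked here, the sketch below illustrates a minimal delta-rule (Rescorla-Wagner-style) learner applied to a simplified two-target choice task loosely modelled on the training phase described above. All target names, payout probabilities, and parameter values are illustrative assumptions and are not taken from the study.

```python
import random
import math

# Hedged sketch: a delta-rule learner choosing between two hypothetical targets.
# Learning rate, temperature, and payout probabilities are illustrative only.
LEARNING_RATE = 0.1   # alpha: how strongly prediction errors update values
TEMPERATURE = 0.2     # softmax temperature controlling choice stochasticity
N_TRIALS = 200

# Each hypothetical target pays out with some probability (its "reward value");
# generosity could be modelled separately, e.g. as the share of a payout returned.
payout_prob = {"target_A": 0.8, "target_B": 0.4}
values = {t: 0.0 for t in payout_prob}  # learned value estimates

def softmax_choice(vals, temperature):
    """Choose a target with probability proportional to exp(value / temperature)."""
    targets = list(vals)
    weights = [math.exp(vals[t] / temperature) for t in targets]
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for target, weight in zip(targets, weights):
        cumulative += weight
        if r <= cumulative:
            return target
    return targets[-1]

for _ in range(N_TRIALS):
    chosen = softmax_choice(values, TEMPERATURE)
    reward = 1.0 if random.random() < payout_prob[chosen] else 0.0
    prediction_error = reward - values[chosen]           # delta
    values[chosen] += LEARNING_RATE * prediction_error   # delta-rule update

print(values)  # estimates should come to track the underlying payout probabilities
```

Under this kind of model, the learned value estimates track each target's payout statistics over trials, which is the sense in which choice behaviour in such a task can reflect reinforcement learning of reward.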
Highlights
Faces and voices are important social stimuli that play a key role in social cognition during interpersonal interactions (Hassin & Trope, 2000)
The attribution of traits to a social identity is consistent across contexts: while the reward value of any one particular interaction with a social partner may vary, traits are assumed to remain stable (Heider, 1944)
Multiple studies have shown that people form trait impressions from briefly presented static images of unfamiliar faces (Todorov et al., 2009) and from brief utterances spoken by novel voices (McAleer et al., 2014)
Summary
Faces and voices are important social stimuli that play a key role in social cognition during interpersonal interactions (Hassin & Trope, 2000). In addition to rapid judgements of personality made from faces and voices, it is adaptive for people to learn about the traits of social partners through observation of their behaviour. This raises the question of whether interactions with face and voice stimuli can be used to train individuals to attribute certain personality traits to a social identity. This has real-world significance for technologies that use voices to represent artificial agents, for example, mobile phone virtual assistants. We can further ask whether it is possible to train individuals to attribute positive or negative traits to different voice or face identities based on experience of their behaviour in interactions.