Abstract

Emotional states can be conveyed by vocal cues such as pitch and intensity. Despite the ubiquity of cellular telephones, there is limited information on how vocal emotional states are perceived during cell-phone transmissions. Emotional utterances (neutral, happy, angry) were elicited from two female talkers and simultaneously recorded via microphone and cell phone. Ten-step continua (neutral to happy, neutral to angry) were generated using the STRAIGHT algorithm. Analyses compared reaction time (RT) and emotion judgments as a function of recording type (microphone vs. cell phone). Logistic regression revealed no judgment differences between recording types, though there were interactions with emotion type. Multilevel model analyses indicated that the RT data were best fit by a quadratic model, with slower RTs at the middle of each continuum, suggesting greater ambiguity, and slower RTs for cell-phone stimuli across blocks. While preliminary, the results suggest that critical acoustic cues to emotion are largely retained in cell-phone transmissions, albeit with an effect of recording source on RT, and they support the methodological utility of collecting speech samples by phone.
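
To make the two named analyses concrete, the sketch below shows one way such models might be set up in Python with statsmodels: a logistic regression of emotion judgments on continuum step and recording type, and a multilevel (mixed-effects) model of RT with a quadratic step term and subject-level random intercepts. The column names (subject, recording, step, judgment, rt), the simulated data, and the effect sizes are all hypothetical illustrations, not the authors' materials or code.

    # Hypothetical sketch, not the study's actual analysis code or data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for subject in range(20):
        for recording in ("microphone", "cellphone"):
            for step in range(1, 11):  # ten-step morphed continuum
                # Judgments shift toward the emotional endpoint along the continuum.
                p_emotion = 1.0 / (1.0 + np.exp(-(step - 5.5)))
                # RT is slowest near the ambiguous midpoint (inverted-U), with a
                # constant slowdown for cell-phone stimuli, mirroring the
                # pattern described in the abstract.
                rt = (700 - 10 * (step - 5.5) ** 2
                      + (60 if recording == "cellphone" else 0)
                      + rng.normal(0, 30))
                rows.append(dict(subject=subject, recording=recording, step=step,
                                 judgment=rng.binomial(1, p_emotion), rt=rt))
    df = pd.DataFrame(rows)

    # Logistic regression: emotion judgment ~ continuum step x recording type.
    logit_fit = smf.logit("judgment ~ step * recording", data=df).fit(disp=False)
    print(logit_fit.summary())

    # Multilevel model: RT with linear and quadratic step terms plus recording
    # type, and random intercepts grouped by subject.
    mlm_fit = smf.mixedlm("rt ~ step + I(step**2) + recording",
                          data=df, groups=df["subject"]).fit()
    print(mlm_fit.summary())

Under this setup, a reliably negative quadratic coefficient would correspond to the midpoint slowdown reported in the abstract, and a positive coefficient on the cell-phone level of recording would correspond to the overall RT cost for cell-phone stimuli.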
