Abstract

Does an audible frown or smile affect speech comprehension? Previous research suggests that a spoken word is recognized faster if its audible affect (frown or smile) matches its semantic valence. In the present study, listeners' task was to evaluate the valence of spoken affective sentences. Formants were raised or lowered using linear predictive coding (LPC) to convey an audible smile or frown gesture co-produced with the stimulus speech. A crucial factor was the talker's perspective in the event being described verbally, in either first or third person. With first-person sentences, listeners may relate the talker's affective state (simulated by formant shift) to the valence of the utterance. For example, in "I have received a prize," a smiling articulation is congruent with the talker having experienced a happy event. However, with third-person sentences ("he has received a prize"), listeners cannot relate the talker's affective state to the described event. (In this example, the talker's affect can be empathic and positive, or envious and negative.) Listeners' response times confirm this hypothesized interaction: congruent utterances are processed faster than incongruent ones, but only for first-person sentences. When listeners evaluate spoken sentences, they combine audible affect, verbal content, and perspective in a sophisticated manner.
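The formant manipulation described above can be sketched in code. Smiling spreads the lips and shortens the vocal tract, which raises formant frequencies; frowning does the opposite. A common way to simulate this is to estimate an LPC spectral envelope, scale the angles of its complex poles (each conjugate pair approximates a formant), and resynthesize the inverse-filtered residual through the shifted envelope. The sketch below is illustrative only and is not the authors' actual stimulus-preparation pipeline; all function names (`lpc_coeffs`, `scale_pole_angles`, `shift_formants`) and parameter values are assumptions for demonstration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coeffs(x, order):
    """LPC via the autocorrelation method (a minimal sketch;
    production work would use frame-by-frame analysis)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), -r[1:])
    return np.concatenate(([1.0], a))

def scale_pole_angles(a, factor):
    """Scale the angles of the complex LPC poles, i.e. shift the
    formant frequencies, while keeping pole magnitudes (bandwidths)."""
    new = []
    for p in np.roots(a):
        ang, mag = np.angle(p), np.abs(p)
        if abs(ang) > 1e-3:  # complex pole pair -> formant resonance
            ang = np.sign(ang) * min(abs(ang) * factor, 0.999 * np.pi)
        new.append(mag * np.exp(1j * ang))
    return np.poly(new).real

def shift_formants(x, order=12, factor=1.1):
    """Raise (factor > 1, audible 'smile') or lower (factor < 1,
    audible 'frown') the formants of a mono signal x."""
    a = lpc_coeffs(x, order)
    residual = lfilter(a, [1.0], x)            # inverse filter -> excitation
    a_shifted = scale_pole_angles(a, factor)
    return lfilter([1.0], a_shifted, residual)  # resynthesize
```

Because the autocorrelation method yields a stable (minimum-phase) all-pole filter and the manipulation leaves pole magnitudes unchanged, the shifted filter remains stable.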
