Abstract

Prosody plays a vital role in verbal communication. It is important for the expression of emotions, but it also carries information on sentence stress and the distinction between questions and statements. Cochlear implant (CI) recipients are restricted in their use of acoustic prosody cues, especially the voice fundamental frequency. However, prosody is also perceived visually, as head and facial movements accompany vocal expression. To date, few studies have addressed multimodal prosody perception in CI users. Controlled manipulation of acoustic cues is a valuable method to uncover and quantify prosody perception; for visual prosody, however, such manipulation is more difficult. We describe a novel approach based on animations of virtual humans. This method has the advantage that, in parallel with acoustic manipulations, head and facial movements can be parametrized. It is shown that animations based on a virtual human generally provide motion cues similar to those of video recordings of a real talker. Parametrization yields fine-grained manipulation of visual prosody, which can be combined with modifications of acoustic features. This allows congruent and incongruent stimuli of varying salience to be generated. Initial results of using this method with CI recipients are presented and discussed.
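
The abstract describes combining parametrized visual prosody with acoustic manipulations to build congruent and incongruent stimuli of varying salience. The sketch below is a minimal, hypothetical illustration of that idea only; it is not the authors' implementation, and all names (VisualProsody, salience, f0_scale) are illustrative assumptions.

```python
# Hypothetical sketch: scaling head/facial movement parameters of a virtual
# human alongside an acoustic F0 manipulation to obtain congruent or
# incongruent audiovisual prosody stimuli. Not the paper's actual pipeline.

from dataclasses import dataclass


@dataclass
class VisualProsody:
    head_pitch_deg: float   # peak head-nod amplitude in degrees (assumed cue)
    eyebrow_raise: float    # normalized eyebrow raise, 0..1 (assumed cue)


def scale_visual(base: VisualProsody, salience: float) -> VisualProsody:
    """Scale head and facial movement amplitudes by a salience factor."""
    return VisualProsody(
        head_pitch_deg=base.head_pitch_deg * salience,
        eyebrow_raise=min(1.0, base.eyebrow_raise * salience),
    )


def make_stimulus(base: VisualProsody, f0_scale: float,
                  visual_salience: float, congruent: bool) -> dict:
    """Pair an acoustic F0 scaling with a visual scaling.

    For an incongruent stimulus the visual cues are attenuated rather than
    enhanced, so they conflict with the acoustic manipulation.
    """
    factor = visual_salience if congruent else 1.0 / visual_salience
    return {
        "f0_scale": f0_scale,
        "visual": scale_visual(base, factor),
        "congruent": congruent,
    }


# Example: a question-like stimulus with raised F0 and either matching
# (congruent) or mismatching (incongruent) movement amplitude.
base = VisualProsody(head_pitch_deg=8.0, eyebrow_raise=0.5)
print(make_stimulus(base, f0_scale=1.3, visual_salience=1.5, congruent=True))
print(make_stimulus(base, f0_scale=1.3, visual_salience=1.5, congruent=False))
```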
