Abstract

The current study investigated people’s ability to discriminate prosody-related head and face motion from videos showing only the upper face of a speaker saying the same sentence with different prosody. The first two experiments used a visual–visual matching task in which the videos were either fully textured (Experiment 1) or showed only the outline of the speaker’s head (Experiment 2). Participants were presented with two pairs of silent videos, and their task was to select the pair that had the same prosody. The overall results of the visual–visual matching experiments showed that people could discriminate same- from different-prosody sentences with a high degree of accuracy. Similar levels of discrimination performance were obtained for the fully textured videos (containing rigid and non-rigid motion) and the outline-only videos (rigid motion only). Good visual–visual matching performance shows that people are sensitive to the underlying factor that determined whether the movements were the same or not, i.e., the production of prosody. However, auditory–visual matching provides a more direct test of people’s sensitivity to how head and face motion relates to spoken prosody. Experiments 3 (with fully textured videos) and 4 (with outline-only videos) therefore employed a cross-modal matching task that required participants to match auditory and visual tokens that had the same prosody. As in the previous experiments, participants performed this discrimination very well, and no decline in performance was observed for the outline-only videos. These results support the proposal that rigid head motion provides an important visual cue to prosody.
