Abstract

Ross's 1981 model of right-hemisphere processing of affective speech components was investigated using the dichotic listening paradigm. A spoken sentence constant in semantic content but varying among mad, sad, and glad emotional tones was presented to 45 male and 45 female college students. Stimulus duration was controlled by adjusting the digital sound samples to a uniform length. No effect of sex emerged, but the hypothesized ear advantage was found: more correct identifications were made with the left ear than with the right. A main effect of prosody was also observed, with significantly poorer performance in identifying the sad tone; in addition, sad-condition scores for the right ear were depressed more than those for the left ear, yielding a significant interaction of ear and prosody.
