Abstract
By analyzing the error scores of normal participants asked to identify a specific word spoken in a specific tone of voice (for example, the word “tower” spoken in a happy tone of voice), we have been able to demonstrate concurrent verbal and affective cerebral laterality effects in a dichotic listening task. The targets comprised the 16 possible combinations of four two-syllable words spoken in four different tones of voice. There were 128 participants equally divided between left- and right-handers, with equal numbers of each sex within each handedness group. Each participant responded to 144 trials on the dichotic task and completed the 32-item Waterloo Handedness Questionnaire. Analysis of false positive responses on the dichotic task (responding “yes” when only the verbal or only the affective component of the target was present, or when both components were present but at opposite ears) indicated that significantly more errors were made when the verbal aspect of the target appeared at the right ear (left hemisphere) and the emotional aspect at the left ear (right hemisphere) than when the reverse was the case. Because a single task generated both effects, differences in participants' strategies or in the way attention is biased cannot account for the results. While the majority of participants showed a right-ear advantage for verbal material and a left-ear advantage for nonverbal material, these two effects were not correlated, suggesting that independent mechanisms probably underlie the establishment of verbal and affective processing. We found no significant sex or handedness effects, though left-handers were much more variable than right-handers. There were no significant correlations between degree of handedness as measured on the handedness questionnaire and extent of lateralization of verbal or affective processing on the dichotic task. We believe that this general technique can provide information about the nature and extent of interhemispheric integration of information and, being easily adaptable to other modalities, holds great promise for future research.