Abstract

Event-related potentials, as well as reaction times and performance accuracies, were recorded from normal young adults during the performance of a memory-scanning task, in response to the first and second items of the memorized set and to the probe. Stimuli included computer-generated digits, presented via earphones as speech (lexical auditory) or on a screen (lexical visual); meaningless voices (nonlexical auditory) with precisely the same frequency content as the digits; or meaningless shapes (nonlexical visual) with the very same colors and contours as the digits. The late positivity (P3) of the evoked potentials to memorized items occurred earlier for auditory than for visual stimuli. P3 to memorized items and to probes occurred earlier for lexical than for nonlexical stimuli. P3 amplitudes to both memorized items and probes were smaller with auditory stimuli. Assuming that P3 latency reflects processing time and that its amplitude reflects attentional allocation (effort) to the task-relevant stimuli, the results support phonological representations during processing in short-term memory, with nonauditory and nonlexical stimuli requiring more processing time and effort. A significant electrode × modality × lexicality interaction may suggest that stimuli of different modalities and lexicality involve variations in the relative contributions of the brain structures involved in their processing.
