Abstract

For many years, the working/short-term memory literature has been dominated by the study of phonological codes; consequently, visual codes have received insufficient attention. In the present study, we attempt to remedy this situation by exploring a critical principle of modern models of working memory: that responses depend not primarily on what kinds of materials are presented, but on what kinds of codes are generated from those materials. More specifically, we used the visual similarity effect as a tool to ask whether visual codes are generated even when information is not presented visually. In two immediate serial recall experiments, we manipulated visual similarity (similar vs. dissimilar words), presentation modality (visual vs. auditory), and concurrent articulation (absent vs. present). We observed a visual similarity effect that was independent of presentation modality. Comparable results were obtained with two different sets of stimuli, both with and without concurrent articulation. Thus, for the first time, we demonstrate that acoustically presented word lists give rise to visual codes in working/short-term memory, producing a visual similarity effect. It is now clear that recoding is bidirectional: presentation in either the visual or the acoustic modality yields a representation of the opposite type.
