A question of current theoretical interest concerning the nature of both recall and recognition of displayed information is the contention that there is a pictorial superiority effect (PSE) in memory. In recent years there has been considerable discussion of the notion that pictures are better retained than their verbal labels, and a good deal of evidence from a variety of recall, recognition, and paired-associate learning tasks supports this contention. Just what do people remember about a picture? Questions concerning the nature of memory for pictures continue to challenge the curiosity of experimental psychologists. Are words and pictures encoded distinctively? That is, does the memory of an item contain information indicating whether the item was presented as a word or as a picture? Is there distinctive processing for words and pictures? The purpose of this paper is to discuss studies lending support to the notion of pictorial superiority in memory. A number of experiments (e.g., Paivio, 1968, 1966) have shown that picture pairs are much easier to recognize than pairs composed of their concrete verbal labels or descriptions. Paivio and Yarmey (1966) obtained support for this hypothesis in a study in which pictures and their noun labels were factorially varied on the stimulus and response sides of pairs. Further, Paivio and Yarmey demonstrated that in paired-associate learning, retention appears to function along a single effective dimension extending from abstract nouns to concrete nouns to pictures (or objects), in increasing order of effectiveness. The question of the pictorial superiority effect in memory has been investigated by psychologists in the field of memory research, and there appear to be at least three possible explanations for the source of picture/word differences. The first (a processing-level model) suggests that there are analogous processing stages for pictures and words, but that the time required for each stage is less for pictorial input. The second (a sensory-semantic model) suggests that pictorial inputs require fewer processing transformations than verbal inputs before semantic processing occurs. Finally, the dual-encoding model posits two separate but interconnected verbal and non-verbal processing or symbolic systems; on this account, pictorial stimuli generate simultaneous verbal and non-verbal long-term memory codes, whereas words do not generally elicit a corresponding pictorial code. The finding that memory is generally better for pictures than for their verbal labels is not surprising: the sensory and meaning codes for a picture are apparently more differentiated and less susceptible to interference. Presumably, more associative cues can be attached to stimuli that have rich visual as well as verbal associates, so that both recall and recognition performance improve with pictorial stimuli.