Short-term memory (STM) for signs in native signers consistently shows a smaller capacity than STM for words in native speakers (see Emmorey, 2002, for review). One explanation of this difference is based on the length effect: Short items yield higher spans than items that take longer to pronounce, presumably because of limited processing time. Signs in American Sign Language (ASL) take longer to articulate than English words (Bellugi & Fischer, 1972). This is not problematic in natural language use, because ASL conveys information simultaneously. However, with immediate serial recall, articulation time looms large. Some researchers have argued that articulation time is sufficient to account for the sign-speech difference in STM (Emmorey, 2002; Marschark & Mayer, 1998; Wilson, 2001; Wilson & Emmorey, 1997). If so, then STM capacity is, at its root, governed by a general processing limitation that is not affected by language modality. However, this claim has never been adequately tested.

If articulation time does not fully account for the sign-speech difference in STM, then other differences between sign and speech may be important. In particular, because vision and audition have strikingly different information-processing characteristics, the sign-speech difference could be due to perceptually based coding. If so, then the principles governing STM are locally determined and cannot be generalized across language modalities.

Recently, Boutla, Supalla, Newport, and Bavelier (2004) addressed this question using stimuli that are articulated very rapidly in ASL. The digits 1 through 9 and the letters of the fingerspelling alphabet in ASL are produced with the fingers of one hand without large-scale movement, and therefore can be produced very quickly. However, the hand shapes for the digits 1 through 9 in ASL are similar, and formational similarity reduces STM (Klima & Bellugi, 1979; Wilson & Emmorey, 1997). Therefore, Boutla et al.
used the Digit Span task from the Wechsler Adult Intelligence Scale (WAIS), but substituted ASL letters for digits. Signers were tested with ASL letters, and speakers with spoken English digits. Span was still longer for English than ASL. The authors concluded that STM for spoken language benefits from auditory-based representations and does not reflect a standard capacity of STM that applies across domains. However, that study compared signed letters with spoken digits, and recent evidence suggests that digits have a special status in STM, yielding better performance than otherwise matched lexical items (Jefferies, Patterson, Jones, Bateman, & Lambon Ralph, 2004). Thus, digits and letters may not be comparable categories for testing STM. A better option, then, would be to compare ASL letters with English letters.

We report here the results of three experiments. The first two verified that digits yield better STM than letters. The third experiment returned to the original question: whether the superiority of spoken language in STM persists when articulatory duration is controlled. We used the WAIS Digit Span task (Wechsler, 1955), in which sequences of items are presented at a rate of one per second and must be repeated by the participant in the correct order. Sequences increase in length, with two sequences of each length, and the test concludes when the participant fails on both sequences of a particular length. One point is awarded for every correct sequence.
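The administration and scoring rules just described can be expressed as a short procedure. The following is a minimal sketch of that logic, not the WAIS itself; the function and parameter names, the starting length, and the simulated participant are all illustrative assumptions.

```python
import random

def span_task_score(recall, start_len=2, max_len=9, seed=0):
    """Sketch of the span-task scoring described in the text:
    sequence lengths increase, with two sequences at each length;
    the test ends when the participant misses both sequences of a
    given length; one point is awarded per correct sequence.
    `recall` is a hypothetical participant: a callable that takes
    the presented list of items and returns the list reported back.
    (All names and defaults here are illustrative, not WAIS values.)"""
    rng = random.Random(seed)
    score = 0
    for length in range(start_len, max_len + 1):
        failures = 0
        for _ in range(2):  # two sequences at each length
            seq = [rng.randint(1, 9) for _ in range(length)]
            if recall(seq) == seq:  # correct serial recall
                score += 1
            else:
                failures += 1
        if failures == 2:  # both sequences at this length missed
            break
    return score

# A hypothetical participant with perfect serial recall up to 6 items:
perfect_to_six = lambda seq: seq if len(seq) <= 6 else []
print(span_task_score(perfect_to_six))  # 2 points at each of lengths 2-6 -> 10
```

Under this scoring rule, a participant who recalls everything up to a fixed span earns two points per length up to that span and then stops the test at the first length where both sequences fail.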