Abstract

The apparent discrepancy in short-term memory capacity between sign language and speech has long presented difficulties for models of verbal working memory. While short-term memory (STM) capacity for spoken language spans up to 7 ± 2 items, verbal working memory capacity for sign languages appears to be lower, at 5 ± 2. The assumption that auditory and visual communication (sign language) rely on the same memory buffers led to claims that STM buffers are impaired in sign language users. Yet no common model addresses both the sensory and the linguistic nature of spoken and sign languages. The authors present a generalized neural model (GNM) of short-term memory use across modalities that accounts for experimental results in both sign and spoken languages. The GNM postulates that, during hierarchically organized processing stages in language comprehension, spoken language users recruit neural resources for spatial representation in a sequential rehearsal strategy, i.e., the phonological loop. The spatial nature of sign language precludes signers from using a similar ‘overflow’ strategy, which speakers rely on to extend their STM capacity. This model offers a parsimonious neuroarchitectural explanation for the conflict between spatial and linguistic processing in spoken language, as well as for the differences observed in STM capacity for sign and speech.
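To make the capacity argument concrete, the following is a minimal toy sketch, not the authors' GNM: it assumes a hypothetical core verbal buffer plus a few extra "overflow" slots that become available only when spatial-representation resources are free for rehearsal. All slot counts and function names are illustrative assumptions, chosen so that the speaker/signer contrast in span falls out of whether the spatial resources are already occupied by the language itself.

```python
# Toy illustration only (not the authors' GNM). Effective STM span is modeled as
# a core verbal buffer plus optional overflow gained by recruiting spatial-
# representation resources for sequential rehearsal. All numbers are hypothetical.

def stm_span(base_slots: int, overflow_slots: int, spatial_resources_free: bool) -> int:
    """Return the effective short-term memory span.

    base_slots             -- items held by the core verbal buffer (hypothetical)
    overflow_slots         -- extra items gained when rehearsal can borrow
                              spatial-representation resources (hypothetical)
    spatial_resources_free -- True for speakers (spatial resources are idle);
                              False for signers (sign language already uses them)
    """
    return base_slots + (overflow_slots if spatial_resources_free else 0)


if __name__ == "__main__":
    # Speakers: spatial resources are free, so rehearsal can 'overflow' into them.
    print("spoken span:", stm_span(base_slots=5, overflow_slots=2,
                                   spatial_resources_free=True))   # -> 7
    # Signers: the language itself is spatial, so no overflow is available.
    print("signed span:", stm_span(base_slots=5, overflow_slots=2,
                                   spatial_resources_free=False))  # -> 5
```

With these assumed slot counts, the same mechanism yields roughly 7 ± 2 for speech and 5 ± 2 for sign without positing any impairment in signers' memory buffers.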
