Abstract

Two studies investigated the ramifications of encoding spatial locations via signing space for perspective choice in American Sign Language. Deaf signers (“speakers”) described the location of one of two identical objects either to a present addressee or to a remote addressee via a video monitor. Unlike what has been found for English speakers, ASL signers did not adopt their addressee’s spatial perspective when describing locations in a jointly viewed present environment; rather, they produced spatial descriptions utilizing shared space, in which classifier and deictic signs were articulated at locations in signing space that schematically mapped to both the speaker’s and the addressee’s view of object locations within the (imagined) environment. When the speaker and addressee were not jointly viewing the environment, speakers either adopted their addressee’s perspective via referential shift (i.e., locations in signing space were described as if the speaker were the addressee) or expressed locations from their own perspective, describing locations from their view of a map of the environment and the addressee’s position within it. The results highlight crucial distinctions between the nature of perspective choice in signed languages, in which signing space is used to convey spatial information, and spoken languages, in which spatial information is conveyed by lexical spatial terms. English speakers predominantly reduce their addressee’s cognitive load by adopting the addressee’s perspective, whereas ASL signers can use shared space (in which there is no true speaker or addressee perspective); in other contexts, reversing the speaker’s perspective is common in ASL and does not increase the addressee’s cognitive load.
