Abstract

We present the results of an experiment on lexical recognition of human sign language signs in which the available perceptual information about handshape and hand orientation was manipulated. Stimuli were videos of signs from Sign Language of the Netherlands (SLN). The videos were processed to create four conditions: (1) one in which neither handshape nor hand orientation could be observed, (2) one in which hand orientation could be extracted but not handshape, (3) one in which an approximation of the handshape could be seen, and (4) one where the video was unmodified. In general, recognition of the signs was almost impossible in the first two conditions, while condition 3 showed a rise in recognition rate to about 60 percent. However, some signs were recognized well even in conditions 1 and 2. Their success rate cannot be linked to a single sign property but seems to be due to a combination of factors. In general, handshape information appears more salient for resolving the lexical meaning of a sign than hand orientation.
