We present the results of an experiment on lexical recognition of human sign language signs in which the available perceptual information about handshape and hand orientation was manipulated. Stimuli were videos of signs from Sign Language of the Netherlands (SLN). The videos were processed to create four conditions: (1) one in which neither handshape nor hand orientation could be observed, (2) one in which hand orientation could be extracted but not handshape, (3) one in which an approximation of the handshape could be seen, and (4) one in which the video was left unmodified. In general, recognition of the signs was almost impossible in the first two conditions, while in condition 3 the recognition rate rose to about 60 percent. However, some signs were recognized well even in conditions 1 and 2. Their success cannot be attributed to a single sign property but appears to result from a combination of factors. Overall, handshape information appears more salient than hand orientation for resolving the lexical meaning of a sign.