Abstract

A method to classify Japanese Sign Language (JSL) words using a gathered image generation method and a convolutional neural network (CNN) is proposed. The JSL words considered are those frequently used in information queries. Gathered images are generated from the difference between the first image, which indicates the start position of a JSL word, and the target images, which indicate the motion positions of the JSL word. The CNN is used to extract features from the gathered images, and the JSL words are then classified using a support vector machine. To show the effectiveness of the proposed method, we conducted experiments and computer simulations. We confirmed that the mean recognition accuracy for 10, 20, and 42 JSL words was 99.2%, 94.3%, and 86.2%, respectively.
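The pipeline described above (gathered image generation, CNN feature extraction, SVM classification) can be illustrated with a minimal sketch. The sketch assumes the gathered image is formed by accumulating absolute differences between the first frame and each subsequent frame of a word clip, and it uses a pretrained ResNet-18 as a stand-in feature extractor; the frame format, backbone, and SVM settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the described pipeline, under the assumptions stated above.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC


def gathered_image(frames: np.ndarray) -> np.ndarray:
    """Accumulate |frame_t - frame_0| over a word clip of shape (T, H, W) in [0, 1]."""
    diff = np.abs(frames[1:] - frames[0]).sum(axis=0)
    return diff / diff.max() if diff.max() > 0 else diff


# Hypothetical CNN feature extractor: a pretrained ResNet-18 with its
# classification head removed (not the network used in the paper).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224), antialias=True)])


def extract_features(img: np.ndarray) -> np.ndarray:
    """Replicate the single-channel gathered image to 3 channels and run the CNN."""
    x = preprocess(np.stack([img] * 3, axis=-1).astype(np.float32))
    with torch.no_grad():
        return backbone(x.unsqueeze(0)).squeeze(0).numpy()


def train_classifier(clips: list[np.ndarray], labels: list[int]) -> SVC:
    """Classify JSL words with an SVM trained on CNN features of gathered images."""
    feats = np.stack([extract_features(gathered_image(c)) for c in clips])
    return SVC(kernel="rbf").fit(feats, labels)
```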
