Abstract

This paper proposes a method for classifying Japanese Sign Language (JSL) words by combining a gathered image generation technique with a convolutional neural network (CNN) approach. In the combined gathered image generation, the maximum difference from the previous and next frames is calculated for each block, and the block information with the maximum difference is embedded into a single image; this is repeated for all blocks. Once the information from all frames of a word has been gathered into one image, a CNN is used to extract features for the classification of JSL words. A multi-class support vector machine (SVM) then classifies words related to greeting and requesting. Experiments showed that the mean and standard deviation of the recognition accuracy of the proposed method were 84.2% and 4%, respectively. These results suggest that the proposed combined gathered image generation and CNN approach can obtain sufficient information to classify 10 JSL words.
