Abstract

This paper presents a method for classifying sign language motion using feature elements extracted by pre-trained networks, an approach that has yielded good results in image recognition. Sign language motions are diverse and complex, making it difficult to extract appropriate feature elements from them manually. Furthermore, collecting the large amounts of sign language motion data needed to train deep networks from scratch is impractical. The potential of sign language recognition systems would therefore be greatly enhanced if pre-trained network models could be used. Feature elements of 25 types of sign language motions were extracted using pre-trained network models, including AlexNet and VGG-16. Classification models of the sign language motions were then trained with a Long Short-Term Memory (LSTM) network on the extracted feature data, and their classification performance was evaluated. The results confirmed that an average classification rate of 70.6% can be obtained using feature elements from the VGG-16 network model together with the LSTM classifier.
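
To make the pipeline concrete, the sketch below illustrates the two-stage approach described above in PyTorch: a pre-trained VGG-16, truncated before its final classification layer, extracts a 4096-dimensional feature vector per video frame, and an LSTM classifies the resulting frame sequence into one of the 25 sign classes. The framework choice, hidden size, sequence length, and exact feature layer are illustrative assumptions; the abstract does not specify these details.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained VGG-16 used as a fixed per-frame feature extractor.
# Dropping the final fully connected layer leaves a 4096-d output
# (an assumed choice; the paper's exact feature layer is not stated).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()
for p in vgg.parameters():
    p.requires_grad = False  # features only; no fine-tuning

class SignLSTM(nn.Module):
    """LSTM classifier over sequences of per-frame CNN features."""
    def __init__(self, feat_dim=4096, hidden=256, n_classes=25):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, frames, feat_dim)
        _, (h, _) = self.lstm(x)     # h: final hidden state
        return self.fc(h[-1])        # logits: (batch, n_classes)

# Dummy usage: 2 clips of 30 frames each, 224x224 RGB
# (the input resolution VGG-16 expects; normalization omitted).
frames = torch.randn(2, 30, 3, 224, 224)
with torch.no_grad():
    feats = vgg(frames.flatten(0, 1)).view(2, 30, -1)  # (2, 30, 4096)
logits = SignLSTM()(feats)
print(logits.shape)  # torch.Size([2, 25])
```

In practice the frame features would be pre-computed once per clip, and the LSTM trained on the cached feature sequences with a standard cross-entropy loss.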
