Abstract

This paper presents the results of a feasibility study on a deep learning scheme for sign language motion recognition. Sign language motions were captured using specially designed colored gloves and an optical camera. Deep learning and conventional classification schemes were applied to motion recognition, and their results are compared. In the deep learning scheme, each frame of motion data is passed directly to AlexNet for feature extraction. Although the structure of the neural network and its training parameters were not optimized at this stage, the recognition accuracy ranged from 59.6% to 72.3% for twenty-five motions. Although this performance is inferior to that of the conventional schemes, the results indicate that a deep learning scheme for sign language motion recognition is feasible.
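As an illustrative sketch only (the paper does not publish its code), the snippet below shows one common way to use a pretrained AlexNet as a per-frame feature extractor, as the abstract describes. The choice of PyTorch/torchvision, the ImageNet weights, and the `extract_frame_features` helper are assumptions for illustration, not the authors' implementation.

```python
# Sketch: per-frame feature extraction with a pretrained AlexNet.
# Assumptions (not from the paper): PyTorch/torchvision, ImageNet weights,
# and standard ImageNet preprocessing for the camera frames.
import torch
from torchvision import models, transforms

# Load AlexNet pretrained on ImageNet and drop its final classification
# layer, so the network returns a 4096-dimensional feature vector per frame.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:-1])
alexnet.eval()

# Standard ImageNet preprocessing; the actual paper's preprocessing of the
# colored-glove frames is not specified in the abstract.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_frame_features(frames):
    """Map a list of video frames (PIL images) to AlexNet feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(f) for f in frames])
        return alexnet(batch)  # shape: (num_frames, 4096)
```

Feature vectors of this kind could then be fed to either a downstream neural classifier or a conventional classifier, matching the comparison the abstract describes.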
