Abstract

Human activity recognition is an important and difficult topic to study because of the substantial variability both between repetitions of a task by a single subject and between subjects. This work is motivated by the need for time-series signal classification together with robust validation and test approaches. This study proposes to classify 60 signs of American Sign Language from data provided by the Leap Motion sensor using several conventional machine learning and deep learning models, including a model called DeepConvLSTM that integrates convolutional and recurrent layers with Long Short-Term Memory cells. A kinematic model of the right and left forearm/hand/fingers/thumb is proposed, as well as a simple data augmentation technique to improve the generalization of the neural networks. DeepConvLSTM and the convolutional neural network achieved the highest accuracies, 91.1 (3.8)% and 89.3 (4.0)% respectively, outperforming the recurrent neural network and the multi-layer perceptron. Integrating convolutional layers in a deep learning model appears to be an appropriate solution for sign language recognition from depth-sensor data.
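The DeepConvLSTM idea described above — convolutional layers extracting short-term temporal features from multichannel kinematic signals, followed by a recurrent layer summarizing the sequence — can be illustrated with a minimal sketch in plain Python. The layer sizes, random weights, and the tanh recurrence standing in for the LSTM cells are illustrative assumptions, not the paper's actual architecture.

```python
import math
import random

random.seed(0)  # fixed weights for reproducibility of this toy example

def conv1d(seq, kernels, bias):
    """Valid-mode 1D convolution over a (time, channels) sequence with ReLU.
    kernels: (n_filters, kernel_size, channels) nested lists."""
    k = len(kernels[0])
    out = []
    for t in range(len(seq) - k + 1):
        frame = []
        for f, kern in enumerate(kernels):
            s = bias[f]
            for i in range(k):
                for c in range(len(seq[0])):
                    s += kern[i][c] * seq[t + i][c]
            frame.append(max(0.0, s))  # ReLU activation
        out.append(frame)
    return out

def simple_rnn(seq, w_in, w_rec):
    """Minimal tanh recurrence standing in for the LSTM layer."""
    h = [0.0] * len(w_rec)
    for frame in seq:
        h = [math.tanh(sum(w_in[j][c] * frame[c] for c in range(len(frame)))
                       + w_rec[j] * h[j])
             for j in range(len(h))]
    return h  # final hidden state summarizes the whole sign

def classify(signal, n_classes=3, n_filters=4, k=3, hidden=5):
    """Conv feature extraction, then recurrence, then linear class scores."""
    ch = len(signal[0])
    rnd = lambda: random.uniform(-0.5, 0.5)
    kernels = [[[rnd() for _ in range(ch)] for _ in range(k)]
               for _ in range(n_filters)]
    bias = [0.0] * n_filters
    w_in = [[rnd() for _ in range(n_filters)] for _ in range(hidden)]
    w_rec = [rnd() for _ in range(hidden)]
    w_out = [[rnd() for _ in range(hidden)] for _ in range(n_classes)]
    feats = conv1d(signal, kernels, bias)
    h = simple_rnn(feats, w_in, w_rec)
    return [sum(w_out[c][j] * h[j] for j in range(hidden))
            for c in range(n_classes)]

# Toy input: 20 time steps of 6 joint-angle channels.
sig = [[math.sin(0.3 * t + c) for c in range(6)] for t in range(20)]
scores = classify(sig)
```

In practice such a model would be built with a deep learning framework and trained with backpropagation; the sketch only shows how convolution over time and recurrence compose into per-class scores.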

Highlights

  • Sign language is a language that mainly uses hand kinematics and facial expressions

  • Human Activity Recognition (HAR) in general is an important and challenging topic to address because of the large variability that exists for a given task

  • The aim of this study was to classify 60 signs of American Sign Language (ASL) using deep neural networks with forearm, hand, and finger kinematic models built from joint position data provided by the Leap Motion

Introduction

Sign language is a language that mainly uses hand kinematics and facial expressions. It is widely used by hearing-impaired people to communicate with each other, but rarely with people who do not have a hearing impairment. An alternative would be real-time translation by interpreters, but they are not always available and can be rather expensive. A system enabling automatic translation would therefore be of great interest. Human Activity Recognition (HAR) in general is an important and challenging topic to address because of the large variability that exists for a given task. Whether this variability arises when a subject repeats an action several times or, more importantly, between subjects, the kinematic behavior over time is difficult to generalize. HAR assumes that these behaviors are represented by specific patterns that can be classified using machine learning algorithms.
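One common way to help a classifier generalize across this variability is simple time-series data augmentation, such as the technique mentioned in the abstract. A minimal sketch, assuming additive Gaussian jitter (the paper's exact augmentation may differ):

```python
import random

def jitter(signal, sigma=0.01, seed=None):
    """Return a noisy copy of a (time, channels) kinematic signal.
    Additive Gaussian jitter is one simple, common augmentation;
    sigma controls the noise amplitude."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, sigma) for v in frame] for frame in signal]

# Each training epoch can draw fresh jittered copies of every recording,
# exposing the network to small plausible variations of each sign.
original = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
augmented = jitter(original, sigma=0.05, seed=42)
```

The augmented copy keeps the original shape and overall trajectory while perturbing each sample slightly, which tends to reduce overfitting to any one subject's exact movement.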
