Abstract

Hand gesture recognition has attracted the attention of many researchers because of its broad applicability in fields such as sign language expression and human-machine interaction. Many approaches have been deployed to detect and recognize hand gestures, including wearable devices, image-based methods, and combinations of sensors and computer vision. Among these, wearable-sensor methods achieve higher accuracy and are less affected by occlusion, lighting conditions, and complex backgrounds. However, existing sensor-based solutions typically process each sensor modality separately and rely on conventional threshold-comparison algorithms for processing and decision-making, without deeper data analysis or machine learning. In this paper, a multi-modal solution is proposed that combines sensors measuring the curvature of the fingers with sensors measuring angular velocity and acceleration. The information provided by the sensors is normalized and analyzed, various fusion strategies are evaluated, and the algorithm most suitable for these sensor-based modalities is identified. The proposed system also distinguishes genuine gestures from ordinary hand movements that closely resemble them.
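
To make the described pipeline concrete, the sketch below illustrates one plausible reading of the normalization and fusion steps: per-channel scaling of finger-curvature and inertial (angular velocity, acceleration) readings, followed by simple feature-level fusion. All names, channel counts, window sizes, and the choice of min-max scaling are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (assumed details): normalize multi-modal sensor
# windows and fuse them at the feature level before classification.
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each channel of a (samples, channels) window to [0, 1]."""
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid division by zero
    return (x - lo) / span

def feature_level_fusion(flex: np.ndarray,
                         gyro: np.ndarray,
                         accel: np.ndarray) -> np.ndarray:
    """Concatenate normalized per-channel statistics from all modalities.

    flex:  (samples, 5)  finger-curvature readings (one per finger, assumed)
    gyro:  (samples, 3)  angular velocity (x, y, z)
    accel: (samples, 3)  linear acceleration (x, y, z)
    """
    feats = []
    for modality in (flex, gyro, accel):
        m = min_max_normalize(modality)
        # Per-channel mean and standard deviation as features
        # (an assumed, deliberately simple choice).
        feats.append(m.mean(axis=0))
        feats.append(m.std(axis=0))
    return np.concatenate(feats)

# Example: one 100-sample window from a hypothetical glove + IMU setup.
rng = np.random.default_rng(0)
fused = feature_level_fusion(
    flex=rng.uniform(200, 800, size=(100, 5)),
    gyro=rng.normal(0, 1, size=(100, 3)),
    accel=rng.normal(0, 9.8, size=(100, 3)),
)
print(fused.shape)  # (22,) -> would feed a classifier in a full system
```

Feature-level concatenation is only one of the "various fusion strategies" the abstract alludes to; decision-level fusion (combining per-modality classifier outputs) would be another reasonable variant under the same assumptions.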
