Abstract

Force myography (FMG) is an emerging alternative to surface electromyography (sEMG) for hand gesture recognition. Most state-of-the-art research in this area explores different machine learning algorithms or feature engineering to improve hand gesture recognition performance. This paper proposes a novel signal processing pipeline that employs a manifold learning method to produce a robust signal representation and boost the performance of hand gesture classifiers. We tested this approach on an FMG dataset collected from nine participants over three data collection sessions with short delays between them. The proposed pipeline was applied to each participant's data, and different classification algorithms were then used to evaluate its effect on hand gesture classification compared to raw FMG signals. The results show that the proposed pipeline reduced the variance within each gesture's data and notably increased the variance between different gestures, improving the robustness and temporal consistency of hand gesture classification. In addition, the pipeline improved classification accuracy consistently across classifiers, yielding an average accuracy improvement of 5%.
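
The following is a minimal sketch of the kind of pipeline the abstract describes: a manifold learning step applied to FMG windows before classification, compared against a classifier trained on the raw signals. The paper does not name the specific manifold learning method, classifier, or data dimensions, so Isomap, an SVM, and the array shapes below are assumptions used purely for illustration.

```python
# Sketch only: Isomap, SVC, and the data shapes are stand-ins, not the paper's method.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed layout: each row is one time window of readings from 8 FMG sensors,
# each label is the gesture performed during that window.
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 8))        # placeholder for real FMG windows
y = rng.integers(0, 6, size=900)     # placeholder labels for 6 gestures

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Baseline: classifier trained directly on the raw (standardized) sensor readings.
baseline = make_pipeline(StandardScaler(), SVC())
baseline.fit(X_train, y_train)

# Pipeline variant: embed the FMG windows into a low-dimensional manifold
# before classification (Isomap is an assumed stand-in for the paper's method).
manifold_clf = make_pipeline(StandardScaler(), Isomap(n_components=3), SVC())
manifold_clf.fit(X_train, y_train)

print("raw-signal accuracy:", baseline.score(X_test, y_test))
print("manifold accuracy:  ", manifold_clf.score(X_test, y_test))
```

With real FMG recordings in place of the placeholder arrays, the same comparison can be repeated per participant and per session to assess consistency across time, as the study does.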

Highlights

  • Hand gesture recognition has been widely applied in areas ranging from simulated environments such as virtual reality (VR) integration [1] to real-world environments such as human-robot interaction [2,3] and prosthesis control [4]

  • By employing machine learning algorithms, hand gestures can be classified based on visual data from cameras [5], inertial data obtained from a gyroscope or accelerometer [6], or muscle activity data such as surface electromyography (sEMG) [7,8]

  • This paper proposes a novel pre-processing pipeline to reduce the stochastic variance of force myography (FMG) signals in hand gesture classification; a sketch of how this variance effect can be quantified follows the list
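
The highlighted variance effect can be checked with standard within-class and between-class scatter measures. The sketch below is not taken from the paper; it assumes a feature matrix X (windows by channels) and gesture labels y, and compares the scatter ratio before and after the pipeline.

```python
# Sketch only: a simple scatter-ratio metric, not the paper's evaluation code.
import numpy as np

def scatter_ratio(X: np.ndarray, y: np.ndarray) -> float:
    """Return trace(between-class scatter) / trace(within-class scatter)."""
    overall_mean = X.mean(axis=0)
    s_within, s_between = 0.0, 0.0
    for label in np.unique(y):
        X_c = X[y == label]
        class_mean = X_c.mean(axis=0)
        s_within += ((X_c - class_mean) ** 2).sum()
        s_between += len(X_c) * ((class_mean - overall_mean) ** 2).sum()
    return s_between / s_within

# Usage (hypothetical variable names): a higher ratio after the pipeline would
# indicate tighter per-gesture clusters that sit further apart, consistent with
# the claimed reduction of within-gesture variance.
# ratio_raw = scatter_ratio(X_raw, labels)
# ratio_pipeline = scatter_ratio(X_embedded, labels)
```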

Introduction

Hand gesture recognition has been widely applied in areas ranging from simulated environments such as virtual reality (VR) integration [1] to real-world environments such as human-robot interaction [2,3] and prosthesis control [4]. By employing machine learning algorithms, hand gestures can be classified based on visual data from cameras [5], inertial data obtained from a gyroscope or accelerometer [6], or muscle activity data such as surface electromyography (sEMG) [7,8]. sEMG is the most established muscle-activity-based hand gesture recognition technique [11,12]; its sensors are mounted on the upper limb to detect the muscles' electrical signals for gesture classification. Force myography (FMG)-based hand gesture recognition instead uses an array of force-sensing resistors placed around a specific part of the limb to capture the volumetric changes of the underlying musculotendinous complex while gestures are performed [20]. Jiang et al. [22] analyzed the performance of FMG sensors in hand gesture classification compared to that of sEMG; their results show that as few as 8 FMG sensors achieved classification accuracy comparable to that of commercially available sEMG sensors in a controlled study.

