Abstract

This paper proposes a human in-hand motion (HIM) recognition system based on multi-modal perception information fusion, which observes the state information between the hand and the object across ten customized HIM manipulations in order to recognize complex HIMs. First, a set of ten HIMs is designed according to the characteristics of in-hand manipulation, and finger trajectory, contact force, and electromyography (EMG) signals are acquired synchronously on a multi-modal data acquisition platform. Second, motions are segmented with a threshold-based method, the multi-modal signals are preprocessed by Empirical Mode Decomposition (EMD), and nonlinear features are extracted as the Maximum Lyapunov Exponent (MLE) of the decomposed signals. Finally, a Random Forest (RF) classifier recognizes the HIMs, and the results are analyzed and discussed in detail, including comparisons of recognition rates across subjects, across sensors, and across different machine learning methods. The experimental results show that the proposed multi-modal HIM recognition system effectively recognizes the ten HIMs with an accuracy of 93.72%.
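To make the described pipeline concrete, the sketch below outlines one possible implementation of the segmentation, EMD preprocessing, MLE feature extraction, and RF classification steps. It is only an illustration of the abstract's workflow, not the authors' code: the library choices (PyEMD, nolds, scikit-learn), the segmentation threshold, the number of IMFs, and all other parameters are assumptions.

```python
# Minimal sketch of the pipeline in the abstract: threshold segmentation,
# EMD preprocessing, largest-Lyapunov-exponent (MLE) features, and a
# Random Forest classifier. Libraries and parameters are illustrative.
import numpy as np
from PyEMD import EMD                      # pip install EMD-signal (assumed)
import nolds                               # pip install nolds (assumed)
from sklearn.ensemble import RandomForestClassifier

def segment_motion(signal, threshold=0.1):
    """Keep samples whose amplitude exceeds a fixed threshold
    (a stand-in for the paper's threshold segmentation step)."""
    return signal[np.abs(signal) > threshold]

def mle_features(signal, n_imfs=3):
    """Decompose one channel with EMD and estimate the largest
    Lyapunov exponent of the first few intrinsic mode functions."""
    imfs = EMD()(signal)
    feats = [nolds.lyap_r(imf) for imf in imfs[:n_imfs]]
    feats += [0.0] * (n_imfs - len(feats))  # pad if fewer IMFs are found
    return feats

def build_feature_vector(trajectory, force, emg):
    """Concatenate MLE features from the three modalities
    (finger trajectory, contact force, EMG)."""
    return np.hstack([mle_features(segment_motion(ch))
                      for ch in (trajectory, force, emg)])

# X: one feature vector per recorded manipulation; y: labels 0..9 for the
# ten HIM classes. The recordings and train/test split are placeholders.
# X = np.vstack([build_feature_vector(t, f, e) for t, f, e in recordings])
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# predictions = clf.predict(X_test)
```

In this sketch each modality contributes a small block of nonlinear features, and the Random Forest operates on their concatenation, which is one straightforward way to realize the "multi-modal perception information fusion" at the feature level.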
