Purpose
This study addresses the issue that existing methods for limb action recognition typically assume a fixed wearing orientation of inertial sensors, an assumption that does not hold in real-world human-robot interaction because of variations in how operators wear the sensors, installation errors, and sensor movement during operation.

Design/methodology/approach
To counter the resulting drop in recognition accuracy, this paper introduces a data transformation algorithm that integrates the Euclidean norm with singular value decomposition. This algorithm mitigates the impact of orientation errors on data collected by inertial sensors. To further improve recognition accuracy, this paper proposes a feature extraction method that combines time-domain and time-frequency domain features, markedly improving the algorithm's robustness. Five classifiers are used in comparative action recognition experiments. Finally, this paper builds an experimental human-robot interaction platform.

Findings
The experimental results demonstrate that the proposed method achieves an average action recognition accuracy of 96.4%, confirming its effectiveness. The approach recognizes data from sensors worn in any orientation while requiring only training samples collected in a single orientation.

Originality/value
This study addresses the challenge of reduced limb action recognition accuracy caused by sensor misorientation. The human-robot interaction system developed in this paper was experimentally verified to effectively and efficiently guide an industrial robot to perform tasks based on the operator's limb actions.
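To illustrate the general idea behind the orientation-robust transformation described above, the following is a minimal sketch in Python. It assumes the transform operates window-by-window on 3-axis inertial data and combines a per-sample Euclidean norm (which is invariant to any rotation of the sensor frame) with an SVD-based re-projection of the window onto its own principal axes. The function name, channel layout, and window size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orientation_invariant_transform(window):
    """Map one window of 3-axis inertial samples (N x 3) to an
    orientation-robust representation.

    Two ingredients, following the idea outlined in the abstract:
      * the per-sample Euclidean norm, which does not change when the
        sensor frame is rotated, and
      * an SVD-based re-projection of the window onto its principal
        axes, which removes the dependence on how the sensor happened
        to be oriented when worn.
    """
    window = np.asarray(window, dtype=float)

    # Rotation-invariant magnitude channel, one value per sample.
    norm_channel = np.linalg.norm(window, axis=1)            # shape (N,)

    # Centre the window and re-express it in its principal-axis frame.
    centered = window - window.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T                                 # shape (N, 3)

    # Stack the magnitude and the aligned axes into a 4-channel signal
    # that downstream feature extraction can consume.
    return np.column_stack([norm_channel, aligned])


# Example: a synthetic 50-sample window of accelerometer readings.
rng = np.random.default_rng(0)
window = rng.normal(size=(50, 3))
features = orientation_invariant_transform(window)
print(features.shape)   # (50, 4)
```

Time-domain and time-frequency features would then be computed on these orientation-normalized channels before classification.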