Abstract
One of the key problems in lower-limb-based human–computer interaction (HCI) is using wearable devices to recognize the wearer's lower limb motions. The signals commonly used to discriminate human motion are mainly biological and kinematic. Because unimodal signals do not carry enough information to recognize lower limb movements reliably, in this paper we propose a Vision Transformer (ViT)-based architecture for lower limb motion recognition from multichannel mechanomyography (MMG) signals and kinematic data. First, a self-attention mechanism is applied to enhance each input channel signal; the enhanced data are then fed into a ViT model. The proposed Vision Transformer-based Lower Limb Motion Recognition (ViT-LLMR) architecture avoids the manual feature extraction and feature selection required by conventional machine learning, and it recognizes eight lower limb motions across six subjects with an accuracy of 94.62%. We also analyze the model's generalization ability under undersampling and when only signal fragments are collected. In conclusion, the proposed ViT-LLMR architecture could provide a basis for practical applications in different HCI fields.
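The pipeline described above (per-channel self-attention enhancement followed by a ViT classifier over signal patches) can be sketched in PyTorch as follows. This is a minimal, hypothetical sketch only: the channel count, window length, patch size, embedding dimension, and encoder depth are all assumptions, since the abstract does not specify the architecture's hyperparameters or the exact form of its attention layer.

```python
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Self-attention across input channels, treating each channel's full
    time series as one token (assumed form of the enhancement step)."""
    def __init__(self, seq_len: int, num_heads: int = 1):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=seq_len,
                                          num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x):
        # x: (batch, channels, seq_len); channels attend to one another
        enhanced, _ = self.attn(x, x, x)
        return x + enhanced  # residual keeps the raw signal content

class ViTLLMR(nn.Module):
    """Minimal ViT-style classifier over enhanced multichannel signals.
    All sizes below are illustrative assumptions, not the paper's values."""
    def __init__(self, channels=8, seq_len=256, patch_len=16, dim=64,
                 depth=4, heads=4, num_classes=8):
        super().__init__()
        assert seq_len % patch_len == 0
        self.patch_len = patch_len
        num_patches = channels * (seq_len // patch_len)
        self.enhance = ChannelSelfAttention(seq_len)
        self.patch_embed = nn.Linear(patch_len, dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # x: (batch, channels, seq_len) of MMG + kinematic signals
        b = x.size(0)
        x = self.enhance(x)
        # split each channel into non-overlapping patches and embed them
        patches = x.unfold(-1, self.patch_len, self.patch_len)
        patches = patches.reshape(b, -1, self.patch_len)
        tokens = self.patch_embed(patches)
        cls = self.cls_token.expand(b, -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)
        return self.head(tokens[:, 0])  # classify from the [CLS] token

# Usage: 2 windows of 8 channels x 256 samples -> logits for 8 motions
model = ViTLLMR()
logits = model(torch.randn(2, 8, 256))
```

Treating each channel's patches as a flat token sequence is one plausible way to map 1-D multichannel signals onto the image-patch interface of a standard ViT; the paper's actual tokenization may differ.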