Abstract

Stroke remains a leading cause of disability, presenting significant challenges to individuals and society. The post-stroke rehabilitation process demands prolonged professional training and evaluation. To tackle the issue of limited resources hindering patients' access to frequent assessments, and to facilitate personalized rehabilitation treatment, wearable technology has emerged as a promising solution. However, current wearable-based approaches often feed raw sensor data directly into deep neural networks, which may not effectively capture intricate temporal relationships unless they incorporate the knowledge typically employed in clinical analysis. In this study, we introduce a hybrid multi-feature neural network that combines manually designed features, commonly used in clinical analysis, with latent features generated by deep networks. By explicitly considering the motion context and spatio-temporal relations among multiple body parts of the upper limb, our model can accurately detect their real-time motions. Empirical evaluations on our proprietary dataset show that the accuracy for subject-dependent and subject-independent experiments on 8 coarse-grained actions is 0.9849 and 0.9871, respectively, while for 24 fine-grained actions the accuracies are 0.9724 and 0.9829, respectively. These results indicate that our model outperforms other methods, contributing to the advancement of stroke rehabilitation and personalized therapy using wearable systems.
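The hybrid multi-feature idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the handcrafted statistics, the stand-in "encoder" (a fixed random projection), and all shapes and names are assumptions chosen for clarity. The key point is the fusion step: clinically motivated features are concatenated with latent features before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def handcrafted_features(window):
    """Simple per-channel statistics over a sensor window of shape (T, C),
    standing in for the clinically inspired features (assumed here)."""
    return np.concatenate([
        window.mean(axis=0),                            # mean per channel
        window.std(axis=0),                             # variability per channel
        np.abs(np.diff(window, axis=0)).mean(axis=0),   # mean absolute change
    ])

def latent_features(window, W):
    """Stand-in for a deep encoder: a fixed linear projection + ReLU.
    A real model would learn this mapping from data."""
    return np.maximum(window.reshape(-1) @ W, 0.0)

T, C, D = 50, 6, 16                    # window length, sensor channels, latent dim
W = rng.normal(size=(T * C, D))        # placeholder "encoder" weights
window = rng.normal(size=(T, C))       # one simulated IMU window

# Fusion: concatenate handcrafted (3*C = 18) and latent (D = 16) features,
# yielding a 34-dimensional hybrid vector fed to a downstream classifier.
hybrid = np.concatenate([handcrafted_features(window), latent_features(window, W)])
print(hybrid.shape)  # (34,)
```

In the full system, the concatenated vector would pass through a trained classification head over the 8 coarse-grained or 24 fine-grained action classes.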
