Abstract

Wearable robots have become increasingly popular in the fields of rehabilitation and medical care in recent years, and human motion intent recognition is gaining traction as a key component of wearable robot operation. Existing approaches fall into traditional methods and convolutional neural network (CNN)-based methods. Traditional approaches rely on manual feature selection, which requires expert knowledge and is tedious. CNN-based approaches treat the input data as a whole and thus cannot fully decouple the features within each sensor channel. This paper proposes a CNN-based motion intent prediction network that uses a multi-channel separated encoder to exploit the data from different sensor axes for feature extraction and to decouple data with different degrees of freedom. Experimental results show that the method successfully recognizes motion intent, improves recognition accuracy over existing methods, and reduces recognition errors during mode transitions.
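The core idea of a multi-channel separated encoder, as opposed to a CNN that mixes all input channels in its first layer, can be sketched as follows. This is an illustrative toy in NumPy, not the paper's architecture: the per-channel kernels, ReLU activations, window size, and 6-axis IMU layout are all assumptions made for the example.

```python
import numpy as np

def conv1d(x, kernel):
    # valid-mode 1D correlation for a single channel
    n, k = len(x), len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(n - k + 1)])

def separated_encoder(window, kernels):
    """Encode each sensor axis with its own kernel, then concatenate.

    window: (channels, timesteps) sensor window
    kernels: one 1D kernel per channel, so features from different
             degrees of freedom stay decoupled until the merge step
    """
    feats = [np.maximum(conv1d(window[c], kernels[c]), 0.0)  # per-channel conv + ReLU
             for c in range(window.shape[0])]
    return np.concatenate(feats)

# toy example: a 6-axis IMU window (3 accel + 3 gyro axes), 50 timesteps
rng = np.random.default_rng(0)
window = rng.standard_normal((6, 50))
kernels = [rng.standard_normal(5) for _ in range(6)]
features = separated_encoder(window, kernels)
```

In a full network each branch would have its own stack of learned convolutional layers, with the concatenated features feeding a shared classifier head; the separation simply guarantees that early features come from one axis at a time.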
