Abstract

Natural control of assistive devices requires continuous decoding of the user's volition into positional commands. Human movement is encoded by the recruitment and rate coding of spinal motor units. Surface electromyography provides partial access to this neural code of movement and is usually decoded into finger joint angles. However, current approaches to mapping the electrical signal to joint angles are unsatisfactory: no existing method allows precise estimation of joint angles during natural hand movements across the hand's large number of degrees of freedom. We propose a framework that trains a neural network on paired recordings from digital cameras and high-density surface electromyography of the extrinsic (forearm and wrist) hand muscles, and we show that a 3D convolutional neural network accurately predicts 14 functional flexion/extension joints of the hand. In experiments with 4 subjects (mean age 26±2.12 years), our model predicted individual sinusoidal finger movements at two speeds (0.5 and 1.5 Hz), two- and three-finger pinching, and hand opening and closing, covering 14 degrees of freedom of the hand. Our deep learning method achieves a mean absolute error of 2.78±0.28 degrees and a mean correlation coefficient between predicted and expected joint angles of 0.94 (95% confidence interval (CI) [0.81, 0.98]), with simulated real-time inference times below 30 milliseconds. These results demonstrate that our approach can predict the user's volition with accuracy comparable to digital cameras, through a non-invasive wearable neural interface.

Clinical relevance: This method establishes a viable interface for both immersive virtual reality medical simulation environments and assistive devices such as exoskeletons and prostheses.
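
To make the decoding pipeline concrete, the sketch below shows a minimal 3D convolutional network in PyTorch that regresses 14 joint angles from a spatio-temporal window of high-density sEMG. This is an illustrative assumption, not the architecture published in the paper: the 8x8 electrode grid, the 192-sample window, the layer sizes, and the name Semg3DCNN are all placeholders.

# Minimal sketch (an assumption, not the authors' published model): a 3D CNN
# that maps a high-density sEMG window, arranged as a (time x grid-rows x
# grid-cols) volume, to 14 flexion/extension joint angles.
import torch
import torch.nn as nn

class Semg3DCNN(nn.Module):
    def __init__(self, n_joints: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),          # downsample along time only
            nn.Conv3d(16, 32, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over time and electrode grid
        )
        self.head = nn.Linear(32, n_joints)   # regress the 14 joint angles

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time_samples, grid_rows, grid_cols)
        return self.head(self.features(x).flatten(1))

# Example: one 192-sample window from an assumed 8x8 electrode grid.
model = Semg3DCNN()
angles = model(torch.randn(1, 1, 192, 8, 8))  # -> (1, 14) joint angles

Treating the electrode grid as a spatial image and the sample axis as depth is what makes a 3D convolution a natural fit for HD-sEMG; global pooling keeps the parameter count small, which is one plausible way to stay within a sub-30 ms real-time inference budget.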
