Reliable prediction of multi-finger forces is crucial for neural-machine interfaces. Various neural decoding methods have progressed substantially toward accurate motor output prediction. However, most neural decoding methods are supervised, i.e., the finger forces are needed for model training, which may not be feasible in certain contexts, especially for individuals with an arm amputation. To address this issue, we developed an unsupervised neural decoding approach that predicts multi-finger forces from spinal motoneuron firing information. We acquired high-density surface electromyogram (sEMG) signals from the finger extensor muscle while subjects performed single-finger and multi-finger isometric extension tasks. We first extracted motor units (MUs) from the sEMG signals of the single-finger tasks. Because finger muscle co-activation is inevitable, MUs controlling the non-targeted fingers can also be recruited. To ensure accurate finger force prediction, these MUs need to be identified and removed. To this end, we clustered the decomposed MUs based on inter-MU distances measured with dynamic time warping, and then labeled the MUs using the mean firing rate or the firing rate phase amplitude. We merged the clustered MUs related to the same target finger and assigned weights based on how consistently each MU was retained. Compared with a supervised neural decoding approach and the conventional sEMG amplitude approach, our new approach achieved a higher R² (0.77 ± 0.036 vs. 0.71 ± 0.11 vs. 0.61 ± 0.09) and a lower root mean square error (5.16 ± 0.58 %MVC vs. 5.88 ± 1.34 %MVC vs. 7.56 ± 1.60 %MVC). Our findings pave the way for accurate and robust neural-machine interfaces that can significantly enhance the experience of human-robotic hand interactions in diverse contexts.
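The MU grouping step described above, pairwise dynamic-time-warping (DTW) distances between motor-unit firing-rate curves followed by clustering, can be sketched roughly as below. This is an illustrative sketch only: the function names, the greedy union-find grouping, and the distance threshold are our assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def cluster_mus(rates, threshold):
    """Group MU firing-rate curves: MUs whose pairwise DTW distance is
    below `threshold` (a hypothetical tuning parameter) share a cluster.
    Returns one integer cluster label per input curve."""
    k = len(rates)
    parent = list(range(k))

    def find(x):  # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(k):
        for j in range(i + 1, k):
            if dtw_distance(rates[i], rates[j]) < threshold:
                parent[find(i)] = find(j)  # merge the two clusters

    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(k)]
```

In practice, the smoothed firing-rate curve of each decomposed MU would be passed to `cluster_mus`; MUs landing in a cluster unrelated to the target finger would then be discarded before force estimation.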