Abstract

The goal of this study was to investigate how individuated finger movements are encoded in areas AIP, F5, and M1. At the single-neuron level, most units in all three areas were “broadly tuned”, responding during multiple movements with different firing-rate amplitudes. However, the specific tuning dynamics of the three areas were distinct. After cue onset, the percentage of tuned AIP units was significantly higher than in the other two areas, whereas before the beginning of the hold epoch the percentage of tuned M1 units significantly exceeded that of areas AIP and F5. This trend was well in line with the partial correlation coefficients (pcc) between error trials and the corresponding correct trials, an analysis capable of disentangling the encoding of visual and movement components. These results demonstrated the more visual-dominant character of AIP and the more movement-dominant character of M1. Given the temporal complexity and heterogeneity at the single-neuron level, it was necessary to analyze all units as a population. Under this perspective, each recorded unit was treated as one dimension of a state space, and the population firing rates evolving over time form a neural trajectory through this space. Plotting the first three principal components yielded a low-dimensional trajectory that could be visualized and that still captured >75% of the total variance of the original data. In areas AIP and M1, the trajectories of the five conditions were quite divergent. The F1 and F2 trajectories were far apart from each other, and the F3 trajectory lay in between. To separate the condition-dependent variance from the time-dependent (condition-independent) variance, we applied demixed principal component analysis (dPCA). In all three areas, the five conditions were well separated when the neural data were projected onto the decoder axes of the first two condition-dependent dPCs.
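The state-space construction described above can be sketched with standard PCA: stack the trial-averaged population firing rates into a time-by-units matrix and project onto the first three principal components. A minimal illustration in Python, assuming NumPy and scikit-learn are available; the firing-rate array here is synthetic, and all shapes and names are hypothetical stand-ins for the recorded data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: trial-averaged firing rates for one condition,
# shaped (n_timepoints, n_units). Each recorded unit is one dimension
# of the state space; each row is the population state in one time bin.
rng = np.random.default_rng(0)
n_timepoints, n_units = 100, 60
rates = rng.normal(size=(n_timepoints, n_units)).cumsum(axis=0)  # smooth toy "activity"

# Project onto the first three principal components to obtain a
# low-dimensional neural trajectory that can be visualized.
pca = PCA(n_components=3)
trajectory = pca.fit_transform(rates)  # shape (n_timepoints, 3)

# Fraction of total variance captured by the three components
# (for the real data this exceeded 75%; the toy value will differ).
captured = pca.explained_variance_ratio_.sum()
```

In practice one trajectory is computed per condition (with PCA fit on all conditions jointly) and the three-dimensional curves are plotted together to compare their divergence.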
However, the proportion of variance explained by the condition-dependent components in area F5 was distinct from that in areas AIP and M1: about 50% in AIP and M1, but only 20% in F5. Based on the condition-dependent components isolated by dPCA, we further tested the relationship between a double movement and its two corresponding single movements. If a linear combination of the two single movements reconstructs the combined double movement better than it reconstructs the third single movement, the double movement can be regarded as a combination of the two single movements in the neuronal state space rather than an independent movement type. In fact, the goodness of fit for reconstructions of the double movements was not significantly different from that for reconstructions of the third single movement, suggesting that both double movements should be considered independent movement types. We performed online decoding to explore the potential application to a hand prosthesis capable of moving fingers independently, since most state-of-the-art prostheses have only a single degree of freedom for opening and closing the hand. Using neural signals from all three areas, real-time decoding performance during the hold epoch was 80%, and with offline manual spike sorting, performance reached 89%. In addition to M1, we also demonstrated the potential benefits of using areas F5 and AIP for prosthesis control.
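The reconstruction test above can be sketched as a least-squares fit: express a target trajectory as a linear combination of two predictor trajectories and compare goodness of fit (R²). A minimal sketch in Python, assuming NumPy; the condition-dependent components here are synthetic toy data (deliberately built so the double movement *is* a combination, which is the opposite of the paper's actual finding), and all names and shapes are hypothetical:

```python
import numpy as np

def reconstruction_r2(target, predictors):
    """Fit target (T x d) as a linear combination of predictor
    trajectories by least squares; return the goodness of fit R^2."""
    X = np.column_stack([p.ravel() for p in predictors])
    y = target.ravel()
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Toy condition-dependent components for three single movements.
rng = np.random.default_rng(1)
T, d = 50, 2
single_a = rng.normal(size=(T, d))
single_b = rng.normal(size=(T, d))
single_c = rng.normal(size=(T, d))

# Toy double movement constructed as a noisy combination of a and b.
double_ab = 0.6 * single_a + 0.4 * single_b + 0.2 * rng.normal(size=(T, d))

r2_double = reconstruction_r2(double_ab, [single_a, single_b])   # fit of the double movement
r2_control = reconstruction_r2(single_c, [single_a, single_b])   # fit of the third single movement
```

In the study, the analogous comparison on the real components showed no significant difference between the two fits, which is what motivated treating the double movements as independent movement types.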
