Abstract

The control of arm movements through intracortical brain–machine interfaces (BMIs) mainly relies on the activities of primary motor cortex (M1) neurons and mathematical models that decode their activities. Recent research on the decoding process attempts not only to improve performance but also to simultaneously understand neural and behavioral relationships. In this study, we propose an efficient decoding algorithm using a deep canonical correlation analysis (DCCA), which maximizes correlations between canonical variables with the non-linear approximation of mappings from neuronal to canonical variables via deep learning. We investigate the effectiveness of using DCCA for finding a relationship between M1 activities and kinematic information when non-human primates performed a reaching task with one arm. Then, we examine whether using neural activity representations from DCCA improves the decoding performance through linear and non-linear decoders: a linear Kalman filter (LKF) and a long short-term memory recurrent neural network (LSTM-RNN). We found that neural representations of M1 activities estimated by DCCA resulted in more accurate decoding of velocity than those estimated by linear canonical correlation analysis, principal component analysis, factor analysis, and a linear dynamical system. Decoding with DCCA yielded better performance than decoding the original firing rates (FRs) using LSTM-RNN (6.6% and 16.0% improvement on average for velocity and position, respectively; Wilcoxon rank-sum test, p < 0.05). Thus, DCCA can identify the kinematics-related canonical variables of M1 activities, thereby improving the decoding performance. Our results may help advance the design of decoding models for intracortical BMIs.
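To make the DCCA objective described above concrete, below is a minimal PyTorch sketch (not the authors' implementation): two small networks map each view (here, a batch of binned M1 firing rates and the corresponding hand velocities) to canonical variables, and the loss is the negative sum of canonical correlations between the two projections. The class and variable names, network sizes, and dimensions (100 units, 2-D velocity, 10 canonical variables) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ViewNet(nn.Module):
    """Non-linear mapping from one view (e.g., binned M1 firing rates) to canonical variables."""
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def cca_loss(h1, h2, eps=1e-4):
    """Negative sum of canonical correlations between two projected views (the DCCA objective)."""
    n = h1.shape[0]
    h1 = h1 - h1.mean(dim=0, keepdim=True)                      # center each view
    h2 = h2 - h2.mean(dim=0, keepdim=True)
    s11 = h1.T @ h1 / (n - 1) + eps * torch.eye(h1.shape[1])    # regularized within-view covariances
    s22 = h2.T @ h2 / (n - 1) + eps * torch.eye(h2.shape[1])
    s12 = h1.T @ h2 / (n - 1)                                   # cross-covariance

    def inv_sqrt(s):                                            # inverse matrix square root via eigendecomposition
        vals, vecs = torch.linalg.eigh(s)
        return vecs @ torch.diag(vals.clamp_min(eps).rsqrt()) @ vecs.T

    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    return -torch.linalg.svdvals(t).sum()                       # singular values of T are the canonical correlations

# Hypothetical usage: 100 M1 units, 2-D hand velocity, 10 canonical variables per view
f_neural, f_kin = ViewNet(100, 10), ViewNet(2, 10)
opt = torch.optim.Adam(list(f_neural.parameters()) + list(f_kin.parameters()), lr=1e-3)
rates, vel = torch.randn(256, 100), torch.randn(256, 2)         # placeholders for real binned data
opt.zero_grad()
loss = cca_loss(f_neural(rates), f_kin(vel))
loss.backward()
opt.step()
```

In a decoding pipeline such as the one studied here, the canonical variables produced by the neural-view network would then be fed to a downstream decoder (e.g., an LKF or LSTM-RNN) in place of the raw firing rates.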

Highlights

  • The primary motor cortex (M1) is robustly linked to the kinematic parameters of the upper limbs (Humphrey, 1972; Humphrey and Corrie, 1978; Georgopoulos et al., 1982, 1986; Sergio et al., 2005; Schwartz, 2007; Aggarwal et al., 2008; Vargas-Irwin et al., 2010)

  • The present study aims to investigate how hand velocity information is represented in canonical variables found by linear canonical correlation analysis (LCCA) or deep canonical correlation analysis (DCCA) and to compare those representations with other neural representations (principal component analysis (PCA), factor analysis (FA), and a linear dynamical system (LDS)) extracted from naïve ensemble firing rates (FRs) (ZE-FR)

  • The canonical variables were obtained from the testing set using either LCCA or DCCA (a minimal LCCA sketch follows this list)
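As an illustration of this step for the linear case, the following scikit-learn sketch fits CCA on a training set and then projects a held-out testing set to obtain its canonical variables. The arrays and their shapes are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical arrays: binned firing rates (time bins x units) and hand velocities (time bins x 2)
rates_train, vel_train = np.random.randn(4000, 100), np.random.randn(4000, 2)
rates_test, vel_test = np.random.randn(1000, 100), np.random.randn(1000, 2)

lcca = CCA(n_components=2)                  # linear CCA; components limited by the 2-D velocity view
lcca.fit(rates_train, vel_train)            # learn canonical directions on the training set only
neural_canon, kin_canon = lcca.transform(rates_test, vel_test)  # canonical variables for the testing set
```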



Introduction

The primary motor cortex (M1) is robustly linked to the kinematic parameters of the upper limbs (Humphrey, 1972; Humphrey and Corrie, 1978; Georgopoulos et al., 1982, 1986; Sergio et al., 2005; Schwartz, 2007; Aggarwal et al., 2008; Vargas-Irwin et al., 2010). In addition to the population vector, neural representations capturing the shared variability in the population's neural activity have been demonstrated to be effective in predicting behavioral covariates (Yu et al., 2009; Shenoy et al., 2013; Cunningham and Yu, 2014; Kao et al., 2015). These neural representations can be acquired through unsupervised learning techniques such as principal component analysis (PCA) (Ames et al., 2014; Kaufman et al., 2014), factor analysis (FA) (Yu et al., 2009), and linear dynamical system (LDS)-based latent-state estimation (Kao et al., 2015), and are known to allow a decoder to guarantee stable outputs (Yu et al., 2009; Kao et al., 2013). Decoding from such representations constitutes the first, indirect category of decoding approaches; the second category is a direct method that operates based on a direct input–output function approximation from neuronal firing activities to kinematic variables (Chapin et al., 1999; Sussillo et al., 2012; Dethier et al., 2013; Ahmadi et al., 2019).
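For illustration only, the following scikit-learn sketch extracts the kind of low-dimensional neural representations mentioned above (PCA and FA) from binned firing rates; the array shape and dimensionality are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Hypothetical binned, z-scored M1 firing rates: time bins x recorded units
rates = np.random.randn(5000, 100)                                  # placeholder for real recordings

latents_pca = PCA(n_components=10).fit_transform(rates)             # principal-component representation
latents_fa = FactorAnalysis(n_components=10).fit_transform(rates)   # shared-variability factors
```

In the indirect category of decoding, representations like `latents_pca` or `latents_fa` would replace the raw firing rates as decoder inputs; the direct category skips this step and maps firing rates to kinematics in one learned function.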

