Abstract

Intracortical brain–computer interfaces (iBCIs) translate neural activity into control commands, allowing paralyzed persons to control devices with their brain signals. Recurrent neural networks (RNNs) are widely used as neural decoders because they can learn neural response dynamics from continuous neural activity. However, input neural activity that is excessively long or short may degrade an RNN's decoding performance. Building on the temporal attention module, which exploits relations among features over time, we propose a temporal attention-aware timestep selection (TTS) method that improves the interpretability of the salience of each timestep in the input neural activity. Furthermore, TTS determines an appropriate input length for accurate neural decoding. Experimental results show that TTS efficiently selects 28 essential timesteps for RNN-based neural decoders, outperforming state-of-the-art neural decoders on two nonhuman primate datasets (one from monkey Indy and one from monkey N). It also reduces computation time for offline training (by 5–12%) and online prediction (by 16–18%). When the attention mechanism in TTS is visualized, preparatory neural activity is consecutively highlighted during arm movement, and the most recent neural activity is highlighted during the resting state in nonhuman primates. Selecting only a few essential timesteps thus provides sufficient decoding performance for an RNN-based neural decoder while requiring only a short computation time.
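The core idea of attention-based timestep selection can be illustrated with a minimal sketch: score each timestep of a neural sequence with a learned attention weight, then keep only the most salient timesteps as the RNN input. The projection vector `w`, the Poisson spike counts, and the helper names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def temporal_attention_scores(x, w):
    """Score each timestep of a neural sequence.

    x : (T, F) array, T timesteps of binned neural features.
    w : (F,) array, a stand-in for a learned attention projection.
    Returns a softmax-normalized salience value per timestep.
    """
    logits = x @ w              # one score per timestep, shape (T,)
    logits -= logits.max()      # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def select_timesteps(x, w, k):
    """Keep the k most salient timesteps, preserving temporal order."""
    scores = temporal_attention_scores(x, w)
    keep = np.sort(np.argsort(scores)[-k:])   # indices of top-k, re-sorted in time
    return x[keep], scores

# Toy example: 40 time bins of 96-channel spike counts, keep 28 timesteps
rng = np.random.default_rng(0)
x = rng.poisson(2.0, size=(40, 96)).astype(float)
w = rng.normal(size=96)
x_sel, scores = select_timesteps(x, w, k=28)
print(x_sel.shape)  # (28, 96)
```

In the actual method the attention scores are learned jointly with the decoder; this sketch only shows the selection mechanics given some scoring function.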

Highlights

  • We implemented three recurrent neural network (RNN)-based neural decoders that are widely used in existing intracortical brain–computer interfaces (iBCIs): the vanilla RNN, long short-term memory (LSTM), and the gated recurrent unit (GRU)

  • For monkey Indy, the optimal timestep count T∗ differed slightly across the three RNN-based neural decoders, which suggests that T∗ may be slightly affected by the input length T; these T∗ values varied within a small range, with T∗ ∈ [2, 4] for the vanilla RNN, T∗ ∈ [5, 7] for the LSTM, and T∗ ∈ [3, 6] for the GRU

  • We proposed temporal attention-aware timestep selection (TTS) to select a few essential timesteps for RNN-based neural decoders while reducing the adverse effects of stochastic noise embedded in long neural sequences



Introduction

Intracortical brain–computer interfaces (iBCIs) aim to improve the daily lives of paralyzed patients by restoring their motor functions [1,2]. An iBCI ascertains the patient’s movement intention and generates motor commands for assistive devices, such as computer cursors [3] and the functional electrical stimulation of paralyzed limbs [4]. A neural decoder translates neural activity into movement intention or spatial location information. Conventional neural decoding techniques process well-segmented neural activities in previous time windows, where task-related information is likely encoded as spiking sequences. A sequence of spike count vectors from many preceding time windows is usually adopted to capture the neural response dynamics [6] and improve the decoding accuracy [7]. An excessively long neural sequence may be polluted by stochastic noise, whereas a short neural sequence may not contain sufficient information.
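The spike count vectors mentioned above are typically produced by binning each channel's spike times into fixed-width time windows. A minimal sketch of that preprocessing step follows; the function name, the 50 ms bin width, and the toy spike times are illustrative assumptions.

```python
import numpy as np

def bin_spike_counts(spike_times, n_channels, t_start, t_end, bin_ms=50):
    """Convert per-channel spike times (in seconds) into a (T, C) count matrix.

    spike_times : dict mapping channel index -> 1-D array of spike times.
    Returns one row per time bin and one column per channel.
    """
    edges = np.arange(t_start, t_end + 1e-9, bin_ms / 1000.0)
    counts = np.stack(
        [np.histogram(spike_times[c], bins=edges)[0] for c in range(n_channels)],
        axis=1,
    )
    return counts

# Toy example: two channels recorded over 200 ms, binned into 50 ms windows
spikes = {0: np.array([0.01, 0.03, 0.12]), 1: np.array([0.07])}
X = bin_spike_counts(spikes, n_channels=2, t_start=0.0, t_end=0.2)
print(X.shape)  # (4, 2): 4 time bins x 2 channels
```

A sequence of such rows from the preceding windows is what an RNN-based decoder would consume as input.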

