Abstract

Retinal visual prosthetic devices aim to restore vision via electrical stimulation delivered to the retina. While a number of devices are commercially available, the stimulation strategies they apply have not met the expectations of end users. These strategies activate neurons according to their spatial locations, regardless of their functional types, which may lead to lower visual acuity. The ability to predict light-evoked neural activity is therefore crucial for developing a retinal prosthetic device with better visual acuity. In addition to temporal nonlinearity, the spatial relationship between the two-dimensional light stimulus and the spiking activity of neuronal populations is the main barrier to accurate predictions. Recent advances in deep learning offer a possible alternative for neural activity prediction tasks. With proven performance on nonlinear sequential data in fields such as natural language processing and computer vision, the emerging transformer model may be adapted to predict neural activities. In this study, we built and evaluated a transformer-based deep learning model to explore its capacity to predict light-evoked retinal spikes. Our preliminary results show that the model can achieve good performance on this task. The high versatility of deep learning models may allow us to predict retinal activity in more complex physiological environments and, by enabling us to anticipate the desired neural responses to electrical stimuli, potentially enhance the visual acuity of retinal prosthetic devices in the future.
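
To make the prediction setup concrete, the sketch below shows one plausible way to wire a transformer encoder for this kind of task: a temporal sequence of flattened stimulus frames is embedded, passed through self-attention layers, and read out as firing rates for a neuron population. This is a minimal PyTorch illustration, not the architecture evaluated in the study; the `SpikePredictor` name, the layer sizes, and the Softplus readout are all assumptions introduced for this example.

```python
# Minimal sketch (an assumption, not the authors' model) of a transformer-based
# predictor: flattened 2-D stimulus frames -> predicted firing rates per neuron.
import torch
import torch.nn as nn

class SpikePredictor(nn.Module):
    def __init__(self, stim_pixels=256, n_neurons=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Embed each flattened stimulus frame into the model dimension.
        self.embed = nn.Linear(stim_pixels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # Self-attention over the frame sequence captures temporal dependencies;
        # the linear embedding mixes spatial information within each frame.
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Map each time step to non-negative firing rates for the neuron population.
        self.readout = nn.Sequential(nn.Linear(d_model, n_neurons), nn.Softplus())

    def forward(self, stimulus):  # stimulus: (batch, time, pixels)
        return self.readout(self.encoder(self.embed(stimulus)))

# Toy usage: 8 trials, 50 frames of a 16x16 stimulus, 64 recorded neurons.
model = SpikePredictor()
rates = model(torch.rand(8, 50, 256))  # -> (8, 50, 64) predicted firing rates
```

A Poisson-style loss between predicted rates and binned spike counts would be one natural training objective for such a sketch; the loss and training details used in the study are not specified in the abstract.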
