Abstract

Visual neuroprostheses are emerging as a promising technology to restore a rudimentary form of vision to people living with incurable blindness. However, phosphenes elicited by current devices often appear artificial and distorted. Although current computational models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy needs to solve the inverse problem: what is the required stimulus to produce a desired response? Here we frame this as an end-to-end optimization problem, where a deep neural network encoder is trained to invert a psychophysically validated phosphene model that predicts phosphene appearance as a function of stimulus amplitude, frequency, and pulse duration. As a proof of concept, we show that our strategy can produce high-fidelity, patient-specific stimuli representing handwritten digits and segmented images of everyday objects that drastically outperform conventional encoding strategies by relying on smaller stimulus amplitudes at the expense of higher frequencies and longer pulse durations. Overall, this work is an important first step towards improving visual outcomes in visual prosthesis users across a wide range of stimuli.
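To make the end-to-end idea concrete, below is a minimal sketch, not the authors' implementation: an encoder network maps a target image to per-electrode amplitude, frequency, and pulse duration, and a reconstruction loss is backpropagated through a simplified, differentiable stand-in for the phosphene model. The electrode layout, parameter ranges, network sizes, and the Gaussian-blob surrogate model are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of end-to-end stimulus optimization (all sizes/ranges are assumptions).
import torch
import torch.nn as nn

N_ELECTRODES = 225        # hypothetical 15x15 electrode grid
IMG_SIZE = 28             # e.g., handwritten digits

class Encoder(nn.Module):
    """Maps a target image to per-electrode (amplitude, frequency, pulse duration)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(IMG_SIZE * IMG_SIZE, 512), nn.ReLU(),
            nn.Linear(512, 3 * N_ELECTRODES), nn.Sigmoid(),   # bounded outputs in [0, 1]
        )

    def forward(self, img):
        params = self.net(img).view(-1, N_ELECTRODES, 3)
        amp, freq, pdur = params.unbind(dim=-1)
        # Rescale to plausible stimulus ranges (illustrative units, not device specs).
        return amp * 100.0, freq * 200.0, pdur * 1.0          # uA, Hz, ms

class SurrogatePhospheneModel(nn.Module):
    """Differentiable stand-in for a phosphene model: each electrode renders a
    Gaussian blob whose brightness and size grow with the stimulus parameters."""
    def __init__(self):
        super().__init__()
        side = int(N_ELECTRODES ** 0.5)
        xs = torch.linspace(-1, 1, side)
        ex, ey = torch.meshgrid(xs, xs, indexing="ij")
        self.register_buffer("ex", ex.flatten())              # electrode x-positions
        self.register_buffer("ey", ey.flatten())              # electrode y-positions
        gs = torch.linspace(-1, 1, IMG_SIZE)
        gx, gy = torch.meshgrid(gs, gs, indexing="ij")
        self.register_buffer("gx", gx)                        # pixel grid
        self.register_buffer("gy", gy)

    def forward(self, amp, freq, pdur):
        # Brightness scales with amplitude and frequency; size with amplitude and pulse duration.
        bright = (amp / 100.0) * (freq / 200.0)                        # (B, E)
        sigma = 0.05 + 0.15 * (amp / 100.0) * (pdur / 1.0)             # (B, E)
        dx = self.gx[None, None] - self.ex[None, :, None, None]        # (1, E, H, W)
        dy = self.gy[None, None] - self.ey[None, :, None, None]
        blobs = torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma[..., None, None] ** 2))
        return (bright[..., None, None] * blobs).sum(dim=1)            # (B, H, W) percept

encoder, phosphene_model = Encoder(), SurrogatePhospheneModel()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(target):
    """One optimization step; target is a (B, 1, H, W) image batch in [0, 1]."""
    amp, freq, pdur = encoder(target)
    percept = phosphene_model(amp, freq, pdur)
    loss = nn.functional.mse_loss(percept, target.squeeze(1))
    opt.zero_grad()
    loss.backward()                                            # gradients flow through the model
    opt.step()
    return loss.item()
```

The key design choice this sketch illustrates is that only the encoder is trained: the phosphene model acts as a fixed, differentiable forward model, so minimizing the reconstruction loss implicitly inverts it. A patient-specific model would simply swap in that patient's fitted parameters.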
