Abstract

Retinal neuroprostheses are the only FDA-approved treatment option for blinding degenerative diseases. A major outstanding challenge is to develop a computational model that can accurately predict the elicited visual percepts (phosphenes) across a wide range of electrical stimuli. Here we present a phenomenological model that predicts phosphene appearance as a function of stimulus amplitude, frequency, and pulse duration. The model uses a simulated map of nerve fiber bundles in the retina to produce phosphenes with accurate brightness, size, orientation, and elongation. We validate the model on psychophysical data from two independent studies, showing that it generalizes well to new data, even with different stimuli and on different electrodes. Whereas previous models focused on either the spatial or the temporal aspects of the elicited phosphenes in isolation, we describe a more comprehensive approach that accounts for many reported visual effects. The model is designed to be flexible and extensible, and can be fit to data from a specific user. Overall, this work is an important first step towards predicting visual outcomes in retinal prosthesis users across a wide range of stimuli.
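
The sketch below is a minimal, illustrative NumPy version of the axon-map idea described above, not the authors' implementation: phosphene brightness is rendered as a Gaussian fall-off with distance from the electrode combined with an exponential fall-off along an idealized nerve fiber bundle, and the linear scalings of brightness, size, and streak length with amplitude, frequency, and pulse duration are placeholder assumptions rather than fitted model terms. (Axon-map models of this kind are also available in the open-source pulse2percept library.)

# Illustrative sketch only: NOT the authors' fitted model. The decay constants
# (rho, lam), the straight-line fiber geometry, and the stimulus-effect
# coefficients below are hypothetical placeholders.
import numpy as np

def axon_map_phosphene(electrode_xy, amp_ua, freq_hz, pdur_ms,
                       rho=200.0, lam=500.0,
                       grid_half_width=1000.0, step=10.0):
    """Render a single phosphene on a retinal grid (coordinates in microns).

    Brightness at each pixel is the maximum contribution over points along an
    idealized nerve fiber bundle passing through the electrode: a Gaussian
    fall-off with distance from the fiber (rho) times an exponential fall-off
    with distance travelled along the fiber (lam). Stimulus parameters scale
    brightness, size, and elongation via assumed linear factors.
    """
    xs = np.arange(-grid_half_width, grid_half_width + step, step)
    xx, yy = np.meshgrid(xs, xs)

    # Assumed (hypothetical) stimulus-effect factors: brighter with higher
    # amplitude and frequency, larger with amplitude, shorter streaks with
    # longer pulse durations.
    f_bright = 0.02 * amp_ua + 0.01 * freq_hz
    f_size = 1.0 + 0.01 * amp_ua
    f_streak = max(0.1, 1.5 - 0.5 * pdur_ms)

    ex, ey = electrode_xy
    # Idealized horizontal fiber segment through the electrode; real nerve
    # fiber bundles curve toward the optic disc (not modeled here).
    axon_x = np.arange(ex - 1500.0, ex + step, step)
    axon_y = np.full_like(axon_x, ey)
    dist_along = ex - axon_x  # distance travelled along the fiber

    percept = np.zeros_like(xx)
    for ax, ay, d_ax in zip(axon_x, axon_y, dist_along):
        d2 = (xx - ax) ** 2 + (yy - ay) ** 2
        contrib = np.exp(-d2 / (2 * (rho * f_size) ** 2)) * \
                  np.exp(-d_ax / (lam * f_streak))
        percept = np.maximum(percept, contrib)
    return f_bright * percept

# Example: a 30 uA, 20 Hz, 0.45 ms/phase pulse train on an electrode at the origin.
frame = axon_map_phosphene((0.0, 0.0), amp_ua=30, freq_hz=20, pdur_ms=0.45)
print(frame.shape, frame.max())

In this toy version every parameter is hypothetical; a user-specific fit, as described in the abstract, would instead estimate the spatial decay constants and the stimulus-effect terms from that user's psychophysical data.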
