Abstract

A Retinal Prosthesis (RP) is an approach to restoring vision that uses an implanted device to electrically stimulate the retina. A fundamental problem in RP is translating the visual scene into retinal spike patterns that mimic the computations normally performed by retinal neural circuits. Toward improved RP interventions, we propose a Computer Vision (CV) image preprocessing method based on Retinal Ganglion Cell (RGC) functions and then use it to reproduce retinal output with a standard Generalized Integrate & Fire (GIF) neuron model. The "Virtual Retina" simulation software provides the stimulus–response data used to train and test our model. We use a sequence of natural images as model input and show that models using the proposed CV image preprocessing outperform models using raw image intensity (interspike-interval distance 0.17 vs. 0.27). This result supports our hypothesis that raw image intensity is an inadequate image representation for predicting RGC responses.
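
To make the pipeline concrete, the sketch below pairs a center-surround difference-of-Gaussians (DoG) filter, a common stand-in for RGC-inspired preprocessing, with a simplified leaky integrate-and-fire cell in place of the full GIF model. This is an illustrative assumption, not the authors' exact preprocessing or neuron parameterization; the function names and all parameter values (rgc_preprocess, lif_spikes, sigma_center, gain, and so on) are hypothetical.

# Illustrative sketch only: DoG filtering as a proxy for RGC-based
# preprocessing, and a simplified leaky integrate-and-fire neuron in
# place of the full GIF model. All parameter values are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def rgc_preprocess(frame, sigma_center=1.0, sigma_surround=3.0):
    # Center-surround (difference-of-Gaussians) filtering of one frame.
    center = gaussian_filter(frame.astype(float), sigma_center)
    surround = gaussian_filter(frame.astype(float), sigma_surround)
    return center - surround

def lif_spikes(drive, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0, gain=5.0):
    # Integrate the drive signal; record a spike time at each threshold crossing.
    v, spike_times = 0.0, []
    for i, x in enumerate(drive):
        v += (dt / tau) * (-v + gain * x)   # leaky integration of the drive
        if v >= v_thresh:
            spike_times.append(i * dt)
            v = v_reset
    return spike_times

# Toy stimulus: a bright patch that turns on and then off again.
frames = np.zeros((200, 64, 64))
frames[50:150, 28:36, 28:36] = 1.0
drive = np.array([rgc_preprocess(f)[32, 32] for f in frames])
print(lif_spikes(drive))   # spike times while the patch is on

In the spirit of the comparison reported above, the raw-intensity baseline would correspond to driving the neuron with the pixel values frames[:, 32, 32] directly instead of the DoG output.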
