Abstract

Visual prostheses can improve vision for people with severe vision loss, but low image resolution and a lack of peripheral vision limit their effectiveness. To address both problems, we developed a prototype advanced video processing system with a head-worn depth camera and feature detection capabilities. We used computer vision algorithms to detect landmarks representing a goal and to plan a path toward the goal, while removing unnecessary distractors from the video. If the landmark fell outside the visual prosthesis's field of view (20 degrees of central vision) but within the camera's field of view (70 degrees), we provided vibrational cues to the left or right temple to guide the user in pointing the camera. We evaluated an Argus II retinal prosthesis participant with significant vision loss who could not complete the task (finding a door in a large room) with either his remaining vision or his retinal prosthesis alone. His success rate improved to 57%, 37.5%, and 100%, with mean completion times of 52.3, 83.0, and 58.8 seconds, using vibration feedback alone, the retinal prosthesis with modified video, and the retinal prosthesis with modified video plus vibration feedback, respectively. This case study demonstrates a possible means of augmenting artificial vision.

Clinical Relevance: Retinal prostheses can be enhanced by adding computer vision and non-visual cues.
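The field-of-view gating described above can be sketched as a simple decision rule. The following is a minimal illustration, not the authors' implementation; the function name, the assumption that the landmark's bearing is expressed as a signed azimuth angle relative to the camera's optical axis, and the return values are all hypothetical.

```python
def vibration_cue(azimuth_deg, prosthesis_fov=20.0, camera_fov=70.0):
    """Decide which temple (if any) to vibrate for a landmark.

    azimuth_deg: signed horizontal angle of the landmark relative to
    the camera axis (negative = left, positive = right). FOV values
    are full angles, so half-angles bound each region.
    """
    half_prosthesis = prosthesis_fov / 2.0
    half_camera = camera_fov / 2.0
    if abs(azimuth_deg) <= half_prosthesis:
        return None  # landmark already within the prosthesis view; no cue needed
    if abs(azimuth_deg) > half_camera:
        return None  # landmark outside the camera view; nothing to cue toward
    # Landmark is visible to the camera but outside the prosthesis view:
    # cue the temple on the landmark's side so the user turns toward it.
    return "left" if azimuth_deg < 0 else "right"
```

For example, a landmark at -20 degrees azimuth lies outside the 10-degree prosthesis half-angle but inside the 35-degree camera half-angle, so the left temple would be cued.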
