Abstract

Visual prostheses such as retinal implants provide bionic vision that is limited in spatial and intensity resolution. This limitation is a fundamental challenge of bionic vision, as it severely truncates salient visual information. We propose to address this challenge by performing real-time transformations of visual and non-visual sensor data into symbolic representations that are then rendered as low-resolution vision, a concept we call Transformative Reality. For example, a depth camera allows the detection of empty ground in cluttered environments, which is then rendered visually as bionic vision to enable indoor navigation. Such symbolic representations are similar to the virtual content overlays used in Augmented Reality, but are registered to the 3D world via the user's sense of touch. Preliminary user trials, in which a head-mounted display artificially constrains vision to a 25×25 grid of binary dots, suggest that Transformative Reality provides practical and significant improvements over traditional bionic vision in tasks such as indoor navigation, object localisation and people detection.
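
To make the described pipeline concrete, the sketch below illustrates the kind of processing the abstract outlines: a depth camera's point cloud is reduced to a binary "empty ground" mask, which is then downsampled to a 25×25 grid of on/off dots. This is a minimal illustration under stated assumptions, not the authors' implementation; the ground test (a fixed height threshold in camera-aligned coordinates) and the majority-vote downsampling stand in for whatever plane detection and phosphene rendering the actual system uses.

```python
# Hypothetical sketch: depth data -> "empty ground" mask -> 25x25 binary dots.
import numpy as np

GRID = 25  # phosphene grid resolution used in the user trials

def ground_mask(points: np.ndarray, height_tol: float = 0.05) -> np.ndarray:
    """Label pixels whose 3D points lie within height_tol metres of the
    floor plane (assumed here to be y = 0 in camera-aligned coordinates;
    a real system would fit the plane, e.g. with RANSAC)."""
    return np.abs(points[..., 1]) < height_tol

def to_phosphenes(mask: np.ndarray, grid: int = GRID) -> np.ndarray:
    """Downsample a boolean mask to a grid x grid binary dot pattern by
    majority vote within each cell."""
    h, w = mask.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    out = np.zeros((grid, grid), dtype=bool)
    for i in range(grid):
        for j in range(grid):
            cell = mask[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[i, j] = cell.mean() > 0.5
    return out

if __name__ == "__main__":
    # Synthetic 240x320 point cloud: a flat floor with one raised obstacle.
    pts = np.zeros((240, 320, 3))
    pts[150:200, 120:200, 1] = 0.3  # obstacle 30 cm above the floor
    dots = to_phosphenes(ground_mask(pts))
    print("\n".join("".join("o" if d else "." for d in row) for row in dots))
```

Run as a script, this prints a 25×25 dot pattern in which walkable floor appears as lit dots and the obstacle as a dark region, approximating the low-resolution view a trial participant would see.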
