Abstract
This demonstration shows a new type of front end for a Retinal Prosthesis/Vision Augmentation (RP/VA) System, as well as a Visual to Auditory Sensory Substitution Device (SSD). Each system processes visual scenes and then presents them in a simplified form, augmented with auditory signals, to assist visually impaired people. Both systems consist of three components: a sensory block to capture the visual scene, a processing block to manage the collected data and generate stimulus patterns, and an output block to deliver those patterns. Here we present two possible setups. In both setups we use a "silicon retina" in the form of a Dynamic Vision Sensor (DVS) for the sensory block. In the hardware implementation, the processing block consists of a microcontroller with an additional circuit for visual-to-audio conversion. The result of the visual processing is presented on an LED matrix, while the SSD (audio) output can be heard on stereo headphones. In the second setup, the processing block is an Android device running an application called SounDVS. This solution also outputs both audio and video signals. The systems represent wearable, low-power, real-time solutions for receiving and processing video input and creating simplified outputs containing the most salient information about the visual scene.
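The abstract does not specify how SounDVS maps DVS events to sound, so the sketch below only illustrates the general idea of visual-to-auditory sensory substitution under an assumed, common mapping: the horizontal event position controls stereo pan and the vertical position controls pitch. The sensor resolution, frequency range, and `DvsEvent` fields are assumptions for illustration, not details from the demonstrated system.

```python
# Illustrative sketch only: the actual SounDVS mapping is not given in the abstract.
# Assumed scheme: event x-coordinate -> stereo pan, y-coordinate -> tone pitch.

import math
from dataclasses import dataclass

SENSOR_WIDTH = 128      # assumed DVS128-class resolution
SENSOR_HEIGHT = 128
SAMPLE_RATE = 44_100
F_MIN, F_MAX = 300.0, 3_000.0  # assumed audible pitch range

@dataclass
class DvsEvent:
    x: int           # column, 0 .. SENSOR_WIDTH - 1
    y: int           # row, 0 .. SENSOR_HEIGHT - 1
    polarity: bool   # True = brightness increase, False = decrease
    timestamp_us: int

def event_to_tone(ev: DvsEvent, duration_s: float = 0.02):
    """Map one DVS event to a short stereo tone burst.

    Returns (left_samples, right_samples) as lists of floats in [-1, 1].
    """
    # Vertical position selects pitch: top of the scene -> higher frequency.
    freq = F_MAX - (ev.y / (SENSOR_HEIGHT - 1)) * (F_MAX - F_MIN)
    # Horizontal position selects stereo pan: left edge -> left channel.
    pan = ev.x / (SENSOR_WIDTH - 1)         # 0 = full left, 1 = full right
    left_gain, right_gain = 1.0 - pan, pan
    # ON events louder than OFF events (arbitrary choice for illustration).
    amp = 0.8 if ev.polarity else 0.4

    n = int(SAMPLE_RATE * duration_s)
    left, right = [], []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
        left.append(s * left_gain)
        right.append(s * right_gain)
    return left, right

# Example: an event near the top-left corner yields a high-pitched tone panned left.
left, right = event_to_tone(DvsEvent(x=10, y=5, polarity=True, timestamp_us=0))
```

In a real-time pipeline, such per-event tones would be mixed into a continuous stereo stream rather than synthesized one burst at a time.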