We aim to reprogram visual perception through an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) Display, using a GPU renderer that rasterizes a target color image or video into cone-by-cone, single-wavelength laser light pulses ("microdoses"). We imaged and tracked a 0.9° × 0.9° field of view of the retina at ~2° eccentricity in 840 nm light. All resolved, spectrally classified cones receive 543 nm microdoses of varying intensities. For each AOSLO frame (30 frames/s), the renderer updates an underlying stimulation image buffer encoding the desired color-percept pattern, taking into account the cone locations, each cone's spectral sensitivity to the 543 nm stimulation light, and the corresponding percept pixel values. Within one frame, the buffer is sampled strip by strip at 1 kHz into world-fixed microdose intensity values, each centered on a cone lying within that strip at that instant. The resulting frame of microdoses visually occupies the whole raster view. We presented multiple color percepts to a cone-classified subject while logging data. The subject saw spatially varying colors, e.g., a red box moving on a green canvas; these percepts validated the accuracy of the prototype. These initial prototyping experiments suggest the potential of presenting general percepts to a cone-classified subject, at cone-level accuracy, in a fully programmable way. The technology allows us to probe neural plasticity and to work toward generating novel percepts.
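Since the abstract walks through the rendering pipeline step by step, a minimal sketch may help fix ideas: a per-cone stimulation buffer derived from tracked cone locations and 543 nm sensitivities, emitted strip by strip within one 30 Hz frame. All names, the inverse-sensitivity intensity model, and the strip geometry below are illustrative assumptions, not the authors' renderer.

```python
import numpy as np

# Minimal sketch of one rendered AOSLO frame, under stated assumptions.
FRAME_RATE_HZ = 30     # AOSLO frame rate
STRIP_RATE_HZ = 1000   # strip update rate within each frame
STRIPS_PER_FRAME = STRIP_RATE_HZ // FRAME_RATE_HZ  # ~33 strips/frame

def microdose_intensities(cone_xy, sens_543, percept_rgb):
    """Map each classified cone to a 543 nm microdose intensity.

    cone_xy     : (N, 2) tracked cone centers in raster pixel coords
    sens_543    : (N,) relative sensitivity of each cone at 543 nm
    percept_rgb : (H, W, 3) desired color-percept image
    """
    h, w, _ = percept_rgb.shape
    xs = np.clip(cone_xy[:, 0].astype(int), 0, w - 1)
    ys = np.clip(cone_xy[:, 1].astype(int), 0, h - 1)
    target = percept_rgb[ys, xs]      # percept pixel under each cone
    # Toy model (an assumption): scale the desired excitation by the
    # inverse of each cone's 543 nm sensitivity so that every cone class
    # is driven toward the target percept value.
    return target[:, 1] / np.maximum(sens_543, 1e-6)

def render_frame(cone_xy, sens_543, percept_rgb, raster_h):
    """Yield world-fixed microdoses strip by strip within one frame."""
    doses = microdose_intensities(cone_xy, sens_543, percept_rgb)
    strip_h = raster_h / STRIPS_PER_FRAME
    for s in range(STRIPS_PER_FRAME):
        y0, y1 = s * strip_h, (s + 1) * strip_h
        in_strip = (cone_xy[:, 1] >= y0) & (cone_xy[:, 1] < y1)
        # Each microdose is centered on a cone lying within this strip
        # at this instant of the scan.
        yield cone_xy[in_strip], doses[in_strip]
```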