Abstract

Visual hyperacuity despite fixational eye movements: a network model

Ofer Mazor1*, Yoram Burak2 and Markus Meister2

1 Harvard University, United States
2 Harvard University, Center for Brain Science, United States

The retina transmits a representation of the visual environment to the brain. Recent studies have focused on the many forms of information processing that occur within the retina, yet much less is known about how the brain interprets the raw retinal output. Here we address this question by focusing on a visual acuity task with well-characterized behavioral performance. Human observers can resolve the separation of two parallel lines with a precision many times finer than the sampling resolution of the retina (Vernier hyperacuity). They do so in the presence of constant involuntary body and eye movements that scan the visual image across the retina along a random trajectory. The image drifts faster than the output cells of the retina (retinal ganglion cells, RGCs) can respond, yet the visual system extracts high-resolution stimulus information, in this case the line separation, from the RGC population response. What must the brain do to extract this information?

Using a multi-electrode array, we measured the activity of a population of mouse RGCs in response to the presentation of two parallel lines. The effects of fixational movements were simulated by drifting the pair of lines, in unison, across the retina. We also simulated ganglion cell spike trains from the human eye, based on published response models of the primate retina. We then explored strategies for estimating the line separation from the RGC population response. An optimal decoder should treat the drift trajectory as a hidden variable to be estimated along with the stimulus. This approach results in a complex decoding algorithm with large memory requirements, an unlikely computation to attribute to the brain.
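To make the memory burden of such an optimal decoder concrete, the toy sketch below (our illustration, not the authors' implementation) maintains a discretized joint posterior over a line offset and a one-dimensional eye position, assuming Gaussian receptive fields, Poisson spiking, and random-walk drift. All numbers, grids, and the `rates` model are illustrative assumptions:

```python
# Toy sketch (our illustration, NOT the authors' implementation): decoding a
# Vernier-style offset when the eye-drift trajectory is a hidden variable.
# Assumed for illustration: a 1-D retina, Gaussian receptive fields,
# Poisson spiking, and a random-walk drift confined to a small grid.
import numpy as np

rng = np.random.default_rng(0)

n_cells = 20                                  # 1-D lattice of RGCs
cell_pos = np.arange(n_cells, dtype=float)    # RF centers, one RF width apart
offsets = np.linspace(-0.5, 0.5, 11)          # candidate line offsets (RF units)
eye_grid = np.arange(-3.0, 4.0)               # candidate eye positions
dt = 0.02                                     # time bin (s)

def rates(offset, eye):
    """Poisson rates (Hz) for two lines at eye+8 and eye+12+offset."""
    d1 = cell_pos - (eye + 8.0)
    d2 = cell_pos - (eye + 12.0 + offset)
    return 0.5 + 5.0 * (np.exp(-0.5 * d1**2) + np.exp(-0.5 * d2**2))

# Joint log-posterior over (offset, current eye position).  Its size, and the
# per-bin update cost, grow with the number of trajectory hypotheses.
log_post = np.zeros((offsets.size, eye_grid.size))

true_offset, eye = 0.2, 0.0
for _ in range(50):                           # 50 time bins of drift
    eye = float(np.clip(eye + rng.choice([-1.0, 0.0, 1.0]), -3.0, 3.0))
    spikes = rng.poisson(rates(true_offset, eye) * dt)
    # Measurement update: Poisson log-likelihood for every hypothesis pair.
    for i, s in enumerate(offsets):
        for j, e in enumerate(eye_grid):
            lam = rates(s, e) * dt
            log_post[i, j] += np.sum(spikes * np.log(lam) - lam)
    # Prediction update: diffuse over the hidden drift step
    # (toy periodic boundary via np.roll).
    p = np.exp(log_post - log_post.max())
    p = (p + np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1)) / 3.0
    log_post = np.log(p + 1e-300)

# Marginalize out the trajectory to read out the stimulus estimate.
est = offsets[np.argmax(np.exp(log_post).sum(axis=1))]
print(f"estimated offset: {est:+.2f} (true {true_offset:+.2f})")
```

Even in this toy, every time bin requires updating one posterior entry per (offset, eye-position) pair; with realistic two-dimensional drift and fine spatial grids, that table grows multiplicatively with each hidden dimension, which is the memory cost the abstract argues against attributing to the brain.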
As an alternative, we introduce a simpler decoding algorithm that does not track the drift trajectory but can still estimate the stimulus with high accuracy. This simplified decoder uses successive time windows of the RGC response to make independent estimates of stimulus likelihood, which are accumulated to form a final estimate. We found that this scheme performs the Vernier task almost as well as a comparable decoder that does track the drift trajectory. Furthermore, its precision matches human performance: the algorithm can estimate line separation to less than one quarter of an RGC receptive field for presentations lasting only 100-200 ms. Finally, we propose a simple two-layer neural network, composed of coincidence detectors and temporal integrators, that implements this computation. We show that the network performs well when the implementation details (number of cells, connectivity, timescale of coincidence detection) are consistent with the known properties of visual cortex. Thus, by closely examining the output of the retina under natural conditions, we can make quantitative proposals about the cortical processing that underlies human visual performance.

Conference: Computational and Systems Neuroscience 2010, Salt Lake City, UT, United States, 25 Feb - 2 Mar 2010.
Presentation Type: Poster Presentation
Topic: Poster session I
Citation: Mazor O, Burak Y and Meister M (2010). Visual hyperacuity despite fixational eye movements: a network model. Front. Neurosci. Conference Abstract: Computational and Systems Neuroscience 2010. doi: 10.3389/conf.fnins.2010.03.00111
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 22 Feb 2010; Published Online: 22 Feb 2010.
* Correspondence: Ofer Mazor, Harvard University, Paris, United States, omazor@fas.harvard.edu
