Abstract

In the brain, both neural processing dynamics and the perceptual interpretation of a stimulus can depend on sensory history. The underlying principle is a sensory adaptation to the statistics of the input collected over some timespan, allowing the system to tune its detectors, e.g., by sampling the input space more effectively and adjusting the response. Here, we show how a model for adaptation in visual motion processing can be set up from first principles by adopting a generative formulation and casting the problem of adaptation in terms of optimal estimation over time. The model leads to an online adaptation of velocity tuning curves, inducing shifts in the velocity tuning and changes in the tuning curve widths that are compatible with observations from physiological experiments on macaque MT neurons. We also show how such an adaptation leads to greater computational efficiency through a better sampling of the velocity space, requiring fewer motion detectors to achieve a desired level of estimation accuracy.
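The following minimal sketch illustrates the general idea of online adaptation of velocity tuning curves to the running statistics of the input; it is not the paper's model. The class name, parameters (e.g., the adaptation rate `tau`), and the update rule are illustrative assumptions: a population of Gaussian tuning curves re-centers and re-scales itself around a leaky running estimate of the stimulus velocity distribution, so the recently observed velocity range is sampled more densely.

```python
import numpy as np

# Hypothetical illustration (not the authors' model): Gaussian velocity
# tuning curves that adapt online to the running mean and variance of the
# stimulus velocity. Preferred velocities shift toward the recent input and
# tuning widths shrink when the input distribution narrows.

class AdaptiveVelocityPopulation:
    def __init__(self, n_units=16, v_range=(-10.0, 10.0), tau=0.05):
        self.centers = np.linspace(*v_range, n_units)     # preferred velocities
        self.width = (v_range[1] - v_range[0]) / n_units  # shared tuning width
        self.mean = 0.0                  # running estimate of the input mean
        self.var = np.var(self.centers)  # running estimate of the input variance
        self.tau = tau                   # adaptation rate (leaky integrator)
        self._base_offsets = np.linspace(-1.0, 1.0, n_units)

    def respond(self, v):
        """Population response to an instantaneous velocity v."""
        return np.exp(-0.5 * ((v - self.centers) / self.width) ** 2)

    def adapt(self, v):
        """One online update of the input statistics and the tuning curves."""
        # Leaky running estimates of stimulus mean and variance.
        self.mean += self.tau * (v - self.mean)
        self.var += self.tau * ((v - self.mean) ** 2 - self.var)
        spread = 2.0 * np.sqrt(self.var)  # roughly +/- 2 SD of recent input
        # Re-center and re-spread preferred velocities around the recent input,
        # scaling the width so neighboring curves keep overlapping.
        self.centers = self.mean + self._base_offsets * spread
        self.width = max(2.0 * spread / len(self.centers), 1e-3)


# Usage: velocities drawn from a narrow distribution around v = 3.
rng = np.random.default_rng(0)
pop = AdaptiveVelocityPopulation()
for _ in range(500):
    pop.adapt(3.0 + 0.5 * rng.standard_normal())
print(np.round(pop.centers, 2))  # centers concentrate near 3; width has shrunk
```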
