Abstract

Three robot studies on visual prediction are presented. All of them use a visual forward model that predicts the visual consequences of saccade-like camera movements. This forward model works by remapping visual information between the pre- and postsaccadic retinal images; at an abstract modeling level, this process is closely related to neurons whose visual receptive fields shift in anticipation of saccades. In the robot studies, predictive remapping is used (1) in the context of saccade adaptation, to reidentify target objects after saccades are carried out; (2) in a model of grasping, in which both fixated and non-fixated target objects are processed by the same foveal mechanism; and (3) in a computational architecture for mental imagery, which generates “gripper appearances” internally, without real sensory inflow. The robotic experiments and their underlying computational models are discussed with regard to predictive remapping in the brain, transsaccadic memory, and attention. The results confirm that visual prediction is a mechanism that should be considered both in the design of artificial cognitive agents and in models of information processing in the human visual system.
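To give a rough sense of the remapping idea, the sketch below shifts a presaccadic retinal image by a planned saccade vector to anticipate the postsaccadic view. This is only an illustrative simplification under assumed names (`predict_postsaccadic`, pixel-level shift, NaN for unpredicted regions); the forward models used in the robot studies are learned from the robots' own visual data rather than hard-coded shifts.

```python
# Illustrative sketch of saccade-driven remapping (not the authors' learned model):
# the presaccadic retinal image is shifted by the planned saccade vector so that
# each visual feature appears where it is expected to land after the camera moves.
import numpy as np

def predict_postsaccadic(presaccadic: np.ndarray, saccade_px: tuple) -> np.ndarray:
    """Predict the postsaccadic retinal image by remapping the presaccadic one.

    presaccadic : 2-D array of pixel intensities in retinal coordinates.
    saccade_px  : planned camera displacement (dy, dx) in pixels (assumed convention).
    Regions that rotate into view have no presaccadic evidence and are
    filled with NaN to mark them as unpredicted.
    """
    dy, dx = saccade_px
    h, w = presaccadic.shape
    predicted = np.full((h, w), np.nan, dtype=float)
    # A feature at retinal location (y, x) lands at (y - dy, x - dx)
    # after the camera moves by (dy, dx).
    src_y = slice(max(dy, 0), h + min(dy, 0))
    src_x = slice(max(dx, 0), w + min(dx, 0))
    dst_y = slice(max(-dy, 0), h + min(-dy, 0))
    dst_x = slice(max(-dx, 0), w + min(-dx, 0))
    predicted[dst_y, dst_x] = presaccadic[src_y, src_x]
    return predicted

# Comparing such a prediction with the actual postsaccadic image yields a
# mismatch signal that could, for example, support reidentifying a target
# object after the saccade.
```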
