In everyday behavior there are multiple competing demands for gaze; walking along a sidewalk, for instance, requires attending to the path while avoiding other pedestrians. Humans therefore make numerous fixations to satisfy their behavioral goals. One attempt to explore suitable gaze-allocation strategies for natural situations was made by Sprague, Ballard, and Robinson (2007), whose computational model predicts human visuo-motor behavior based on intrinsic reward and uncertainty. Their model presupposes that specific visual information is acquired only from the currently fixated object. However, evidence that peripheral objects affect gaze position by drawing it towards the center of gravity of target ensembles (Findlay, 1982; Vishwanath & Kowler, 2003) challenges this premise. Since the influence of peripheral information on natural vision remains largely unexplored, we investigated whether gaze targeting is biased towards peripheral objects in naturalistic tasks. In a virtual-reality environment, we examined the fixations of 12 participants while they walked through a virtual room containing objects of two different colors, designated as targets or obstacles. The subjects were instructed either to collect targets, to avoid obstacles, or to do both simultaneously. In situations in which one of two visible objects was fixated, subjects' gaze positions were biased towards the non-fixated object more often (72.6%) than away from it (27.4%). Moreover, gaze was drawn towards the neighboring object more frequently when that neighbor was relevant to the current task than when it was task-irrelevant. These results indicate that information from peripheral objects affects human gaze targeting in natural vision. Furthermore, the effect of the neighbor's task-relevance, and therefore of intrinsic reward, suggests that within a single fixation subjects may gather information from the peripheral visual field to serve multiple current goals at once. Meeting abstract presented at VSS 2015.
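The abstract does not specify how the towards/away classification was computed; a minimal sketch of one plausible analysis is given below, assuming hypothetical 2D coordinates for the fixated object, its neighbor, and the landing gaze position. The function name gaze_bias and the projection-based criterion are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gaze_bias(fixated_center, neighbor_center, gaze_position):
    """Classify whether gaze on the fixated object deviates towards
    or away from a neighboring object.

    All arguments are 2D positions (e.g., on the ground or image
    plane). A positive projection of the gaze offset onto the
    direction of the neighbor counts as 'towards'; otherwise 'away'.
    This criterion is an assumption for illustration.
    """
    fixated_center = np.asarray(fixated_center, dtype=float)
    neighbor_center = np.asarray(neighbor_center, dtype=float)
    gaze_position = np.asarray(gaze_position, dtype=float)

    to_neighbor = neighbor_center - fixated_center  # direction of the neighbor
    gaze_offset = gaze_position - fixated_center    # deviation of gaze from object center

    # Sign of the projection indicates towards (+) vs. away (-).
    projection = np.dot(gaze_offset, to_neighbor)
    return "towards" if projection > 0 else "away"

# Made-up example: gaze lands slightly off-center, shifted in the
# direction of the neighboring object.
print(gaze_bias(fixated_center=(0.0, 0.0),
                neighbor_center=(2.0, 0.0),
                gaze_position=(0.1, 0.05)))  # -> 'towards'
```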