Abstract

The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the locations of such large objects, as visual processing is highly edge-dominated at both the neural and perceptual levels. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square’s edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square’s edges.
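
As a minimal illustration (not the study's actual stimulus code), the Python sketch below models a 1D luminance cross-section of a soft-edged square as a truncated Fourier series; shifting the higher harmonics stands in for the phase manipulation described above and moves the points of maximum luminance gradient at the edges, and with them the midpoint that an edge-based saccade computation would target. The function names and parameter values are assumptions made for this sketch.

```python
# Illustrative sketch only: a 1D luminance cross-section of a soft-edged
# square (width 4 deg, as in the study), built from the first three odd
# harmonics of a square wave. Displacing the higher harmonics (a stand-in
# for the phase manipulation) shifts the points of maximum luminance
# gradient at the edges, and hence the midpoint between them.
import numpy as np

WIDTH = 4.0                                  # square width in degrees
x = np.linspace(-4.0, 4.0, 8001)             # visual-field position (deg)

def luminance_profile(x, harmonic_shift=0.0):
    """Soft-edged square; `harmonic_shift` (deg) displaces only the 3rd and 5th harmonics."""
    f = 1.0 / (2.0 * WIDTH)                  # fundamental frequency (cycles/deg)
    profile = np.cos(2 * np.pi * f * x)
    profile -= np.cos(2 * np.pi * 3 * f * (x - harmonic_shift)) / 3
    profile += np.cos(2 * np.pi * 5 * f * (x - harmonic_shift)) / 5
    return profile

def max_gradient_edges(profile, x):
    """Steepest-gradient locations on the left (x < 0) and right (x > 0) edges."""
    g = np.gradient(profile, x)
    left = x[x < 0][np.argmax(np.abs(g[x < 0]))]
    right = x[x > 0][np.argmax(np.abs(g[x > 0]))]
    return left, right

for shift in (0.0, 0.3, -0.3):               # hypothetical harmonic shifts (deg)
    left, right = max_gradient_edges(luminance_profile(x, shift), x)
    predicted_target = 0.5 * (left + right)  # midpoint of max-gradient edge points
    print(f"shift {shift:+.1f} deg -> edges ({left:+.2f}, {right:+.2f}), "
          f"predicted saccade target {predicted_target:+.2f} deg")
```

Under this toy model, the predicted saccade target follows the harmonic shift, which is the pattern the study reports for the observers' average saccade endpoints.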

Highlights

  • The visual field location of a given object is arguably the most fundamental object property that our visual system needs to represent and convey

  • While the central regions of such objects do produce some neural activity, there is no evidence of a central activity peak that would indicate the location of the object in a straightforward manner

  • Our results clearly indicate that saccades to large luminance-defined objects are programmed based on edge locations derived from the points of steepest luminance gradient

Summary

Introduction

The visual field location of a given object is arguably the most fundamental object property that our visual system needs to represent and convey. Object recognition in the human visual system relies heavily on location information. Because visual processing is highly edge-dominated, one is led to think that the computation of saccades to large objects is based on signals concerning the object’s edges. It is, in principle, not necessary that a single value for the location of a large object is represented anywhere in the visual system. We therefore studied which property of a large object’s edges is used in the computation of a saccade to the object. Our results clearly indicate that saccades to large luminance-defined objects are programmed based on edge locations derived from the points of steepest luminance gradient.
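
To make concrete why a large object's location is not a single self-evident value, the sketch below (purely illustrative; the logistic edge model and all parameters are assumptions, not the study's stimuli) builds an object whose two edges have different luminance profiles and compares two candidate location estimates: the midpoint of the steepest-gradient edge points and the luminance-weighted centroid. For such an asymmetric profile the two estimates disagree, so a saccade computation must in effect commit to one definition.

```python
# Purely illustrative: a soft-edged "square" with a sharp left edge and a
# shallow right edge. Two plausible definitions of its location, the midpoint
# of the steepest-gradient edge points and the luminance-weighted centroid,
# then give different answers. Edge model and parameters are assumptions.
import numpy as np

x = np.linspace(-6.0, 6.0, 12001)                    # position (deg)

def logistic_edge(x, position, steepness):
    """Smooth luminance step at `position`; larger `steepness` means a sharper edge."""
    return 1.0 / (1.0 + np.exp(-steepness * (x - position)))

# Nominal edges at -2 and +2 deg: sharp left edge, shallow right edge.
profile = logistic_edge(x, -2.0, 20.0) - logistic_edge(x, 2.0, 1.5)

gradient = np.gradient(profile, x)
left_edge = x[x < 0][np.argmax(np.abs(gradient[x < 0]))]   # steepest point, left
right_edge = x[x > 0][np.argmax(np.abs(gradient[x > 0]))]  # steepest point, right
edge_midpoint = 0.5 * (left_edge + right_edge)

centroid = np.sum(x * profile) / np.sum(profile)           # luminance-weighted centre

print(f"max-gradient edges at {left_edge:+.2f} and {right_edge:+.2f} deg "
      f"-> midpoint {edge_midpoint:+.2f} deg")
print(f"luminance-weighted centroid at {centroid:+.2f} deg")
```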
