Abstract

The perception of tool–object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool–object affordances. Eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool–object images across three contexts (correct, e.g. hammer-nail; incorrect, e.g. hammer-paper; spatial/ambiguous, e.g. hammer-wood) and three grasp-types (no hand; functional grasp-posture, e.g. grasping the hammer-handle; non-functional/manipulative grasp-posture, e.g. grasping the hammer-head). There were three areas of interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool–object pairs were functionally correct or incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool–object contexts and, to a lesser extent, within the incorrect tool–object context. The grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating the tool–object context. Participants also primarily focused on the object and the operant tool-end and sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action-priming effect wherein the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture caused the greatest disruption to the object-oriented priming effect, ostensibly because it does not afford tool–object action, given its non-functional interaction with the operant tool-end that actually engages with the object (e.g., hammer-head to nail). The enhanced attention towards the manipulative grasp-posture may serve to encode grasp-intent. These results shed new light on how an observer gathers action-information when evaluating static tool–object scenes and reveal how contextual and grasp-specific affordances directly modulate visuospatial attention.
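
A minimal sketch of how such scanpath clustering can be carried out, assuming each trial's gaze record is reduced to a sequence of AOI fixation labels. The AOI codes, the Levenshtein distance, and the average-linkage clustering below are illustrative assumptions for exposition, not the paper's exact analysis pipeline:

    # Illustrative sketch: cluster gaze scanpaths encoded as AOI sequences.
    # Assumed AOI codes: O = object, T = operant tool-end, G = graspable tool-end.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def levenshtein(a: str, b: str) -> int:
        """Edit distance between two AOI fixation sequences."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    # Hypothetical scanpaths, one AOI string per trial/condition.
    scanpaths = ["OOTTO", "OTOTO", "GGTOG", "GTGOG", "TOTOO", "OOTTG"]

    # Symmetric pairwise-distance matrix, condensed for scipy's linkage().
    n = len(scanpaths)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = levenshtein(scanpaths[i], scanpaths[j])
            dist[i, j] = dist[j, i] = d

    # Average-linkage hierarchical clustering, cut into three clusters
    # (mirroring the three grasp-specific clusters reported above).
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=3, criterion="maxclust")
    print(labels)

Because Levenshtein distance compares ordered sequences, a variant that instead compares unordered AOI fixation-count histograms would test the robustness to temporal order mentioned in the abstract.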
