Abstract

Hands have evolved as specialised effectors capable of both fine-tuned and gross motor actions. The location and functional capabilities of the hands therefore help define which visual objects, among the multitude of visual inputs in our environment, are action-relevant. Visuospatial attention plays a critical role in the processing of such inputs. The aim of the present thesis was to investigate how internal representations of the hands, and of the actions we intend to complete with them, impact visuospatial attention near the body.

In Study 1, I investigated how visuospatial attention contributes to luminance contrast sensitivity and object dimension judgements near the hands. Targets were presented either briefly (43ms) or for a duration sufficient to allow shifts in covert visuospatial attention prior to target offset (250ms). Observers detected the onset of visual objects of varying luminance contrast (Experiment 1) and discriminated whether rectangles of varying aspect ratios were larger in width or height (Experiment 2), with their hands adjacent to or distant from the display. In Experiment 1, detection accuracy for low-contrast stimuli was greater for targets presented for 250ms than for 43ms; the opposite was true for high-contrast stimuli, where accuracy was greater for 43ms than for 250ms presentations, and hand proximity did not modulate either effect. In Experiment 2, 250ms target presentations reduced the vertical bias in aspect ratio judgements and improved visual sensitivity when the hands were adjacent to, versus distant from, the monitor. Visual sensitivity in the hand-adjacent posture was also greater for 250ms than for 43ms target durations, indicating enhanced precision of object dimension judgements for near-hand objects following shifts in visuospatial attention.

In Study 2, I examined how internal representations of the hands (handedness and grasping affordances) influence the distribution of visuospatial attention in peripersonal space. Left- and right-handed participants completed a covert visual cueing task, responding with either their dominant or non-dominant hand (Experiment 1), with the non-response hand adjacent to one of two target placeholders (and the other hand responding), either aligned with the shoulder (Experiment 2) or crossed over the body midline into the opposite region of hemispace (Experiment 3). In blocked trials, targets appeared near the grasping (palmar) or non-grasping (back-of-hand) region of the hand. Experiment 1 found no evidence for visuospatial biases associated with handedness or response-hand laterality. In Experiment 2, right-handers showed a larger attentional cueing cost for objects near the grasping surface than the non-grasping surface of their dominant hand, suggesting that visuospatial attention is engaged more rapidly by, and disengaged more slowly from, objects in graspable (versus non-graspable) space. Moreover, only hand-proximity biases remained when the hands were crossed over the body midline (Experiment 3), and these biases were not evident in left-handers. This indicates that visuospatial biases are specific to the functional properties of the hands and to the strength of the underlying hand representation.

Finally, in Study 3 I investigated the impact of action goals on the distribution of near-body visuospatial attention (Experiment 1) and how the temporal relationship between task-irrelevant visual distractors and targets modifies this (Experiment 2). Following the illumination of a left or right target light-emitting diode (LED), participants reached to point to or grasp target objects. Coincident with target onset, a distractor LED was illuminated in either the same or the opposite visual hemispace, halfway between the initiation point and the target, or no distractor appeared. In Experiment 1, grasp reaches showed greater distractor interference effects (slower reach initiation and greater trajectory deviation along the x-axis) than point reaches. In Experiment 2, distractor onset was either 200ms before (-200ms), coincident with (0ms), or 200ms after (+200ms) target onset. For both point and grasp actions, -200ms distractors produced greater interference effects than 0ms and +200ms distractors. For grasp reaches, +200ms distractors produced larger interference effects than 0ms distractors, whereas for pointing actions, -200ms distractors were associated with more deviated reaches than coincident and +200ms distractors. Grasp reaches also displayed greater trajectory deviation for -200ms distractors than for coincident distractors. These findings indicate that grasping remaps the distribution of visuospatial attention such that non-target objects within the frame of action are prioritised more than when pointing. Moreover, perceptual uncertainty regarding the layout of actable space influences grasp trajectories more than it does pointing trajectories.

The current thesis presents evidence that near-body visual perception is shaped by a hierarchy of attentional biases associated with the functional representation of the hands and with manual action goals. The results show that near-body visuospatial attention is driven in a bottom-up manner by the location and functional properties of the hands. Importantly, they also provide evidence for concurrent top-down modulation of near-body visuospatial attention by manual action goals, which updates action to accommodate changes in the visual environment.
