Abstract

Responses to object stimuli are often faster when jutting handles are aligned with the responding hand than when they are not: the handle-to-hand correspondence effect. According to a location coding account, the locations of visually salient jutting parts determine the spatial coding of objects. This asymmetry then facilitates same-sided responses compared to responses on the opposite side. Alternatively, this effect has been attributed to grasping actions of the left or right hand afforded by the handle orientation, independently of its salience (affordance activation account). Our experiments were designed to disentangle the effects of pure salience from those of affordance activations. We selected pictures of tools with one salient, non-graspable side and one graspable, non-salient side (a non-jutting handle). Two experiments were run, each with two groups of participants: one group discriminated the location of the salient side of the object stimuli; the other discriminated the location of the graspable side. In Experiment 1, responses were left and right button presses; in Experiment 2, they were left and right button presses plus reach-and-grasp actions. When visual salience was removed from graspable sides, no correspondence effect was observed between their orientation and the responding hand in either experiment. Conversely, when salience depended on non-graspable portions, a correspondence effect emerged between their orientation and the responding hand. Overt attention to graspable sides did not potentiate any grasping affordance, even when participants executed grasping responses in addition to button presses. These results support the location coding account: performance was influenced by the spatial coding of visually salient object properties.

Highlights

  • As we interact with our environment, our efficiency in coordinating our behavior depends on the recognition of the most salient cues

  • Pellicano et al. (2017b) proposed an action coding account that refined the original location coding account, claiming that the spatial coding of tool objects depends on a higher-level process involving the evaluation of semantic and action features, rather than on lower-level processing of structural asymmetries in the object's body

  • In the incompatible mapping block of the graspable instruction group, all 18 participants reported that, after a few trials, they remapped the incompatible relations for the graspable side to compatible relations for the goal-directed side

Introduction

As we interact with our environment, our efficiency in coordinating our behavior depends on the recognition of the most salient cues. Salience is a direct product of perceived asymmetry in the stimulus image, which renders one side of the depicted object more spatially distinctive than the other (Cho and Proctor 2011). The locations of these salient portions are coded within a spatial stimulus set (e.g., left and right stimulus spatial codes) that overlaps (Kornblum et al. 1990) with the spatial response set (left and right responses), setting the preconditions for the emergence of a stimulus–response (S–R) spatial correspondence effect. The handle-to-hand correspondence effect would thus be an instance of the object-based Simon effect (Cho and Proctor 2010), that is, of the earlier finding that stimulus location (or orientation) can influence motor responses (e.g., button presses) even when it is task irrelevant (Scorolli et al. 2015; Pellicano et al. 2010b; Vu et al. 2005; see Proctor and Vu 2006 for a review).
