Abstract

Humans constantly combine multi-sensory spatial information to interact successfully with objects in peripersonal space. Previous studies suggest that sensory inputs from different modalities are encoded in different reference frames. In cross-modal tasks, where the target and response modalities differ, it is unclear into which reference frame these multiple sensory signals are transformed for comparison. The current study used a slant perception and parallelity paradigm to explore this issue. Participants perceived (either visually or haptically) the slant of a reference board and were asked either to adjust an invisible test board by hand manipulation or to adjust a visible test board through verbal instructions until it was physically parallel to the reference board. We examined the patterns of constant error and variability in unimodal and cross-modal tasks with various reference slant angles at different reference/test locations. The results revealed that, rather than being a mixture of the unimodal patterns, the pattern in cross-modal conditions depended almost entirely on the response modality and was not substantially affected by the target modality. Deviations in haptic response conditions could be predicted from the locations of the reference and test boards, whereas the reference slant angle was an important predictor in visual response conditions.

Highlights

  • Previous studies reported that sensory transformation can incur a cost, adding bias and variability[5,6].

  • We assumed that the similarity between the patterns of constant error in cross-modal and unimodal conditions could reflect the reference frame (RF) used for comparison, or the relative weighting of RFs.

  • If multiple RF signals were integrated, the change in response variability in cross-modal tasks could provide further information about this process according to the maximum likelihood principle (MLP)[35,36]; a minimal sketch of this prediction follows the list.

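Under the MLP, an integrated estimate is a reliability-weighted average of the unimodal estimates, and its variance falls below the smaller of the two unimodal variances. The following is a minimal sketch of that prediction in Python; the function name and all numeric values are illustrative assumptions, not quantities measured in this study.

    # Minimal sketch of the maximum-likelihood (MLP) integration prediction,
    # assuming two independent, Gaussian unimodal slant estimates.
    # All names and numbers here are illustrative, not data from the study.

    def mle_combination(est_a, var_a, est_b, var_b):
        """Reliability-weighted combination of two independent cues.

        Each cue is weighted by its inverse variance; the combined
        variance is never larger than the smaller input variance.
        """
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
        combined_est = w_a * est_a + (1.0 - w_a) * est_b
        combined_var = (var_a * var_b) / (var_a + var_b)
        return combined_est, combined_var

    # Hypothetical unimodal estimates of the reference slant (degrees):
    visual_est, visual_var = 30.0, 4.0
    haptic_est, haptic_var = 34.0, 9.0

    est, var = mle_combination(visual_est, visual_var, haptic_est, haptic_var)
    print(f"combined: {est:.2f} deg, variance: {var:.2f}")
    # variance 2.77 < min(4.0, 9.0): the drop in response variability
    # that would signal integration in the cross-modal conditions.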

Introduction

Previous studies reported that sensory transformation can incur a cost, adding bias and variability[5,6]. A reference frame (RF) intermediate between an allocentric frame and a body-centred egocentric frame has been proposed for haptic parallelity tasks[27,29]. Most of these studies focused on one or two modality conditions, such as haptic-only[4], visual-only[28], visuo-haptic[26,32,33] or haptic-visual[34] tasks, and did not investigate the transformations and comparisons among RFs. In the current study, we sought to expand on previous work by using a new slant perception and parallelity paradigm to investigate cross-modal RFs. In our experiment, participants perceived (either visually or haptically) a reference board (i.e., either a visual target or a haptic target) at various slant angles, and were asked either to rotate an invisible test board by hand manipulation (haptic response) or to adjust a visible test board by giving verbal instructions to the experimenter to rotate it (i.e., a response based on visual information, referred to as a "visual response" hereafter) until they judged that the test board was physically parallel to the reference board. We investigated direct comparisons in unimodal conditions and RF transformations in cross-modal conditions, identifying the rules governing the processing flow of cross-modal sensory transformation and comparison.

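To make the two dependent measures concrete, here is a minimal sketch in Python, assuming that constant error is the mean signed deviation of the parallelity settings from the reference slant and that variability is the standard deviation of those deviations; the function name and data are hypothetical, not taken from the study.

    # Minimal sketch of the two dependent measures, assuming "constant error"
    # is the mean signed deviation of the test-board settings from the
    # reference slant and "variability" is the standard deviation of those
    # deviations. The settings below are made up for illustration.
    from statistics import mean, stdev

    def constant_error_and_variability(settings, reference):
        """Return (constant error, variability) for repeated settings.

        settings:  adjusted slants of the test board, in degrees
        reference: slant of the reference board, in degrees
        """
        deviations = [s - reference for s in settings]
        return mean(deviations), stdev(deviations)

    # Hypothetical settings from one participant (reference slant = 30 deg):
    ce, sd = constant_error_and_variability([36.5, 41.0, 38.2, 39.7, 35.9], 30.0)
    print(f"constant error: {ce:+.1f} deg, variability (SD): {sd:.1f} deg")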