Abstract

Information obtained from multiple sensory modalities, such as vision and touch, is integrated to yield a holistic percept. Because haptic exploration usually involves cross-modal sensory experiences, it is necessary to develop an apparatus that can characterize how a biological system integrates visual-tactile sensory information, as well as how a robotic device infers object information emanating from both vision and touch. In the present study, we develop a novel visual-tactile cross-modal integration stimulator that consists of an LED panel for presenting visual stimuli and a tactile stimulator with three degrees of freedom that can present tactile motion stimuli with arbitrary motion direction, speed, and indentation depth into the skin. The apparatus can present cross-modal stimuli in which the spatial locations of visual and tactile stimulation are perfectly aligned. We presented visual-tactile stimuli in which the visual and tactile motion directions were either congruent or incongruent, and human observers reported the perceived direction of visual motion. Results showed that the perceived direction of visual motion can be biased by the direction of tactile motion when visual signals are weakened. The results also showed that visual-tactile motion integration follows the rule of temporal congruency of multi-modal inputs, a fundamental property known for cross-modal integration.
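
The finding that tactile motion biases visual motion perception mainly when the visual signal is weak is consistent with reliability-weighted cue combination, a standard account of cross-modal integration. The sketch below is a minimal illustration of that account, not the paper's model; the function, parameter values, and Gaussian-cue assumption are ours.

```python
def integrate_directions(vis_dir_deg, tac_dir_deg, vis_sigma, tac_sigma):
    """Reliability-weighted (maximum-likelihood) combination of a visual and a
    tactile estimate of motion direction, each modeled as a Gaussian cue.
    This is an illustrative assumption, not the model used in the paper."""
    w_vis = (1.0 / vis_sigma**2) / (1.0 / vis_sigma**2 + 1.0 / tac_sigma**2)
    # Linear averaging of angles is adequate here because the two directions
    # are less than 90 degrees apart.
    return w_vis * vis_dir_deg + (1.0 - w_vis) * tac_dir_deg

# Hypothetical incongruent trial: visual motion at 0 deg, tactile motion at 90 deg.
for vis_sigma in (1.0, 5.0, 20.0):   # increasing visual uncertainty (deg)
    perceived = integrate_directions(0.0, 90.0, vis_sigma, tac_sigma=10.0)
    print(f"visual sigma = {vis_sigma:5.1f} deg -> perceived direction ~ {perceived:5.1f} deg")
```

As the visual cue becomes noisier, the tactile weight grows and the combined estimate is pulled toward the tactile direction, matching the qualitative pattern described in the abstract.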

Highlights

  • For a biological system, perception often requires information emanating from sensors of multiple modalities, such as vision, touch and audition [1,2,3]

  • In the visual only condition, we found that the probability of choosing the veridical direction of visual motion peaked at zero noise, monotonically decreased as noise levels increased, and reached chance level at the maximum level of visual noise (Figure 4(A), green trace for data obtained from one subject; Figure 4(B), green trace for data averaged across subjects)

  • Results indicate that the perceived visual direction is biased toward the tactile direction, especially when the visual noise level is high, providing evidence of visual-tactile integration (an analysis sketch follows this list)
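
The probability measure in the highlights can be thought of as a per-condition tabulation: for each visual noise level and each congruency condition, count the proportion of trials on which the observer reported the veridical visual direction. A minimal sketch of that tabulation follows; the trial records, field layout, and noise levels are hypothetical, not the study's data.

```python
from collections import defaultdict

# Hypothetical trial records: (visual noise level, tactile congruent?, veridical response?)
trials = [
    (0.0, True, True), (0.0, False, True),
    (0.5, True, True), (0.5, False, False),
    (1.0, True, True), (1.0, False, False),
]

# (noise, congruent) -> [number of veridical responses, number of trials]
counts = defaultdict(lambda: [0, 0])
for noise, congruent, veridical in trials:
    counts[(noise, congruent)][0] += int(veridical)
    counts[(noise, congruent)][1] += 1

for (noise, congruent), (n_veridical, n_total) in sorted(counts.items()):
    label = "congruent" if congruent else "incongruent"
    print(f"noise={noise:.1f}  {label:>11}: P(veridical) = {n_veridical / n_total:.2f}")
```

Plotting P(veridical) against noise level separately for congruent and incongruent trials yields the kind of summary the highlights describe: the two curves separate as noise increases when tactile motion biases the visual report.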

Introduction

Perception often requires information emanating from sensors of multiple modalities, such as vision, touch and audition [1,2,3]. The interaction between audition and vision determines perceived speech [4] and the perceived timing of collision [5]. Touch and vision are similar in that both sensory signals derive from sheets of sensor arrays: cutaneous receptors in the skin and photoreceptors in the retina. Touch and vision are intuitively integrated to yield a holistic percept of the environment around us [8,9]. It has been hypothesized that cross-modal integration is processed in cortical regions that receive both visual and tactile signals [10,11,12], and it is of interest to understand where and how in the brain this occurs.

