Abstract

Research in the area of multimodal displays and information processing has reported several benefits of distributing information across multiple sensory channels (vision, audition, and touch, in particular). However, with few exceptions, studies on multimodal information processing risk confounding modality with other factors, such as salience, because no cross-modal matching is performed prior to the experiments. To date, no agreed-upon cross-modal matching method has been developed. The goal of our research is to develop various matching approaches and to compare their feasibility and validity. In this paper, we present the findings for one particular technique that employs cue adjustments and bidirectional matches. Six participants were asked to perform a series of 216 matching tasks for combinations of cues in vision, audition, and touch. The results show that participants’ matches differed from one another, were inconsistent across trials, and were also a function of the intensity level of the initial cue. The findings from this research further highlight the need for careful matching of multimodal cues in research on multisensory information processing and will inform refinements of the proposed technique.
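The sketch below illustrates, under stated assumptions, how a bidirectional cross-modal matching schedule of this kind might be constructed: every ordered pair of modalities is matched in both directions, and the comparison cue starts above or below the standard. The specific factor levels (standard intensities, start offsets, repetition count) are hypothetical choices for illustration, not the paper's actual design.

# Hypothetical sketch of a bidirectional cross-modal matching schedule.
# The factor levels below are assumptions chosen for illustration; the
# study's 216 trials may be factored differently.
import itertools
import random

MODALITIES = ["vision", "audition", "touch"]    # cue channels studied
STANDARD_LEVELS = [0.25, 0.50, 0.75]            # assumed normalized standard intensities
START_OFFSETS = ["below", "above"]              # comparison cue starts below/above the standard
REPETITIONS = 6                                 # assumed repeats per condition


def build_trials():
    """Enumerate ordered modality pairs so every pair is matched in both
    directions (e.g., vision->audition and audition->vision)."""
    trials = []
    # permutations(..., 2) yields the 6 ordered pairs needed for bidirectional matching
    for standard, comparison in itertools.permutations(MODALITIES, 2):
        for level in STANDARD_LEVELS:
            for offset in START_OFFSETS:
                for rep in range(REPETITIONS):
                    trials.append({
                        "standard_modality": standard,
                        "comparison_modality": comparison,
                        "standard_level": level,
                        "comparison_start": offset,
                        "repetition": rep,
                    })
    random.shuffle(trials)  # randomize presentation order
    return trials


if __name__ == "__main__":
    trials = build_trials()
    # 6 ordered pairs x 3 levels x 2 start offsets x 6 reps = 216 trials
    print(f"{len(trials)} matching trials")
    print(trials[0])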
