Abstract

Multimodal displays, i.e., displays that distribute information across multiple sensory channels (mainly vision, hearing, and touch), have received considerable attention in recent years. To be effective, such displays must be designed on the basis of a firm understanding of how information is processed both within and across modalities. However, most studies of crossmodal information processing to date suffer from a methodological shortcoming: they fail to perform crossmodal matching to ensure that modality is not confounded with other stimulus properties, such as salience. One reason for this shortcoming is that there is no agreed-upon crossmodal matching technique and that existing approaches have notable limitations. The goal of the present study is to develop and validate a more reliable crossmodal matching method that employs repeated matching. To this end, six participants used this technique to match a series of 54 modality pairings involving vision, audition, and touch. Results show that the intra-individual variability of participants' matches was significantly lower than that observed with an earlier technique involving bidirectional matching and visual feedback. These findings confirm the need for improved crossmodal matching procedures and for applying them before conducting experiments on multisensory information processing.
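
As a rough illustration of the kind of analysis the abstract describes, the Python sketch below computes intra-individual variability as the within-participant standard deviation of repeated matches, averaged over the modality pairings. The array shapes, the repetition count, and the simulated match values are assumptions for illustration only; they are not the paper's actual procedure or data.

import numpy as np

# Hypothetical illustration of intra-individual variability in
# repeated crossmodal matching. All names, shapes, and values here
# are assumptions, not the authors' analysis code.
#
# matches[p, m, r] = matched stimulus intensity produced by
# participant p for modality pairing m on repetition r.
rng = np.random.default_rng(0)
n_participants, n_pairings, n_repetitions = 6, 54, 5  # repetition count assumed
matches = rng.normal(loc=50.0, scale=3.0,
                     size=(n_participants, n_pairings, n_repetitions))

# Intra-individual variability: standard deviation across repeated
# matches of the same pairing, computed separately per participant.
within_sd = matches.std(axis=2, ddof=1)   # shape: (n_participants, n_pairings)

# Summarize each participant by averaging over the 54 pairings.
per_participant = within_sd.mean(axis=1)
print("Mean intra-individual SD per participant:",
      np.round(per_participant, 2))

In a comparison like the one the abstract reports, these per-participant variability scores from the repeated-matching technique could then be tested against the corresponding scores obtained with the earlier bidirectional-matching procedure.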
