Abstract

Visual data overload and the associated performance breakdowns in safety-critical environments have triggered significant interest in developing multimodal displays, i.e., displays that distribute information across multiple sensory channels (mainly vision, hearing, and touch). Yet more than 95% of studies on multimodal information processing share a methodological shortcoming: the failure to perform ‘crossmodal matching’, in which participants equate the perceived intensities of stimuli across sensory channels before an experiment so that modality is not confounded with salience. There is currently no agreed-upon technique for this task, and the few studies that included the step employed different methods. The goal of this study is to compare three crossmodal matching techniques and determine whether they yield useful and congruent outcomes; the degree of intra-individual variability of crossmodal matches is of particular interest. Eighteen participants performed a series of 54 crossmodal matches for visual, auditory, and tactile stimuli, adjusting intensity with a mouse on a visual sliding scale, keyboard arrows, or a rotary knob. Intra-individual variability of matches differed significantly as a function of both the matching technique and the order in which stimuli were presented. These findings confirm the need to develop an agreed-upon, reliable crossmodal matching technique for use in future studies.
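For readers interested in the kind of analysis the abstract describes, the sketch below illustrates one plausible way to quantify intra-individual variability of crossmodal matches: the standard deviation of each participant's repeated matches within a technique-by-order condition. The column names, sample data, and the choice of standard deviation as the variability measure are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch (not from the paper): intra-individual variability
# computed as the per-participant standard deviation of repeated matched
# intensities within each matching-technique x presentation-order cell.
import pandas as pd

# Assumed long-format data: one row per match attempt.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "technique":   ["slider", "slider", "slider", "knob", "knob", "knob"],
    "order":       ["A-to-V"] * 6,   # e.g., auditory standard matched visually
    "intensity":   [0.62, 0.58, 0.71, 0.55, 0.54, 0.57],
})

# Within-subject SD per condition; lower values indicate more
# consistent (less variable) crossmodal matches.
variability = (
    df.groupby(["participant", "technique", "order"])["intensity"]
      .std()
      .rename("within_subject_sd")
      .reset_index()
)
print(variability)
```

These per-cell values could then be submitted to a repeated-measures comparison across techniques and stimulus orders, mirroring the effects reported in the abstract.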
