Abstract

This article comprises two sections that address the relevance of codes (verbal-spatial) and modalities (auditory-visual) in the multiple-resource model to the prediction of task interference. The first section describes an experiment in which either verbal or spatial decision tasks, responded to with either voice or keypress, were time-shared with second-order tracking. Decision-problem difficulty was manipulated, and subjective workload as well as performance measures were assessed. The results supported the importance of the dichotomy between verbal and spatial processing codes in accounting for task interference: interference with tracking was consistently greater, and difficulty/performance trade-offs were stronger, when the decision task was spatial and the response was manual. The second section reviews literature on the interference between a continuous visual task and a discrete task whose modality is either auditory or visual. The review suggests that scanning imposes a dominant cost on intramodal configurations when visual channels are separated in space; when visual separation is eliminated, however, the differences between cross-modal and intramodal performance may be best accounted for by a mechanism of preemption. Discrete auditory stimuli preempt the processing of a continuous visual task, facilitating their own processing at the expense of the continuous task. Such preemption does not occur when discrete and continuous visual tasks are time-shared.
