Abstract

What happens if we put vision and touch into conflict? Which modality "wins"? Although several previous studies have addressed this question, they have focused solely on the integration of vision and touch for low-level object properties (such as curvature, slant, or depth). In the present study, we introduce a multimodal mixed-reality setup based on real-time hand tracking, which was used to display real-world haptic exploration of objects in a virtual environment through a head-mounted display (HMD). With this setup, we studied multimodal conflict situations using objects that varied along higher-level, parametrically controlled global shape properties. Participants explored these objects in both unimodal and multimodal settings, with the latter including congruent and incongruent conditions as well as differing instructions for weighting the input modalities. Results demonstrated a surprisingly clear touch dominance across all experiments, which, moreover, could be only marginally influenced by instructing participants to bias their modality weighting. We also present an initial analysis of the hand-tracking patterns, which illustrates the potential of our setup for investigating exploration behavior in more detail.