Abstract

Despite the fundamental role played by sound in multiple virtual reality contexts, few studies have explored the perception of virtual sound source motion in the acoustic space. The goal of this study was to compare the localization of virtual moving sound sources rendered with two different spatialization techniques: Vector Base Amplitude Panning (VBAP) and fifth-order High Order Ambisonics (HOA), both implemented in a soundproofed room and in their most basic form (basic decoding for HOA, VBAP without spread parameter). The perception of virtual sound trajectories surrounding untrained subjects (n=23) was evaluated using a new method based on a drawing-augmented multiple-choice questionnaire. In the spherical loudspeaker array used in this study, VBAP proved to be a robust spatialization technique for sound trajectory rendering in terms of trajectory recognition and height perception. In basic-decoded HOA, subjects exhibited far more disparate trajectory recognition and height perception performances but performed better in perceiving sound source movement homogeneity.
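To make the first of the two compared techniques concrete: VBAP distributes a source signal over the loudspeaker pair (in 2D) whose directions bracket the virtual source, solving a small linear system for the gains. The sketch below is not the authors' implementation; it is a minimal pairwise 2D version of Pulkki's formulation, assuming loudspeakers on a circle around the listener and no spread parameter, matching the "most basic form" used in the study. Function and variable names are illustrative.

```python
import math

def vbap_2d_gains(src_angle_deg, spk_angles_deg):
    """Pairwise 2D VBAP: find the loudspeaker pair whose unit vectors
    bracket the source direction, solve L g = p for the 2x2 base
    matrix L, then normalize so that ||g|| = 1 (constant power)."""
    px = math.cos(math.radians(src_angle_deg))
    py = math.sin(math.radians(src_angle_deg))
    n = len(spk_angles_deg)
    gains = [0.0] * n
    for i in range(n):
        j = (i + 1) % n  # adjacent loudspeaker pair on the circle
        l1x = math.cos(math.radians(spk_angles_deg[i]))
        l1y = math.sin(math.radians(spk_angles_deg[i]))
        l2x = math.cos(math.radians(spk_angles_deg[j]))
        l2y = math.sin(math.radians(spk_angles_deg[j]))
        det = l1x * l2y - l2x * l1y
        if abs(det) < 1e-12:
            continue  # degenerate pair (collinear directions)
        # Cramer's rule for [l1 l2] [g1 g2]^T = p
        g1 = (px * l2y - l2x * py) / det
        g2 = (l1x * py - px * l1y) / det
        if g1 >= -1e-9 and g2 >= -1e-9:  # source lies between this pair
            norm = math.hypot(g1, g2)
            gains[i] = max(g1, 0.0) / norm
            gains[j] = max(g2, 0.0) / norm
            return gains
    return gains
```

For a source midway between two adjacent loudspeakers, the two gains are equal (each 1/√2 under constant-power normalization); a source aligned with a loudspeaker activates that loudspeaker alone, which is why VBAP localization is sharpest at loudspeaker positions.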

Highlights

  • Virtual reality is increasingly used in ever more applications, from leisure pursuits to industrial contexts to education and medicine

  • We systematically address the perception of sound source trajectories rendered by Virtual Auditory Displays (VADs)

  • Our study investigated the perception of sound source trajectories in the horizontal plane, using two different sound spatialization techniques: Vector Base Amplitude Panning (VBAP) and High Order Ambisonics (HOA)
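The second technique compared in the highlights, HOA with basic decoding, works differently from VBAP: the source is first encoded into spherical (here, for simplicity, 2D circular) harmonic signals, and every loudspeaker then receives a weighted mix of all harmonics. The sketch below is an illustrative simplification, not the study's decoder: it assumes a regular circular array and a plain sampling ("basic") decoder, and all names are hypothetical.

```python
import math

def hoa2d_encode(src_angle_deg, order=5):
    """Encode a plane-wave source into 2D circular-harmonic signals
    B = [1, cos(t), sin(t), ..., cos(M*t), sin(M*t)] up to order M."""
    t = math.radians(src_angle_deg)
    b = [1.0]
    for m in range(1, order + 1):
        b += [math.cos(m * t), math.sin(m * t)]
    return b

def hoa2d_basic_decode(b, spk_angles_deg):
    """'Basic' (sampling) decoder for a regular circular array:
    each loudspeaker gain re-samples the encoded field at the
    loudspeaker direction (orders m >= 1 weighted by 2), scaled
    by 1/N so the gains of a plane wave sum to 1."""
    order = (len(b) - 1) // 2
    n = len(spk_angles_deg)
    gains = []
    for a in spk_angles_deg:
        t = math.radians(a)
        g = b[0]
        for m in range(1, order + 1):
            g += 2.0 * (b[2 * m - 1] * math.cos(m * t)
                        + b[2 * m] * math.sin(m * t))
        gains.append(g / n)
    return gains
```

Unlike VBAP, every loudspeaker is generally active (some with small negative gains), which spreads the energy more evenly over the array; this is one plausible reason why movement homogeneity was perceived better in HOA, while localization sharpness depends on the decoder and order.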


Introduction

Virtual reality is increasingly used in ever more applications, from leisure pursuits (e.g. video games) to industrial contexts (e.g. assembly method prototyping [1]) to education and medicine (e.g. psychotherapy [2]). Sound plays a fundamental role in virtual reality through its capacity to convey information and to reinforce the experience of immersion [3]. Although sound display has long been neglected compared with video, systems now exist that create immersion through sound, using headphones [4] or loudspeakers to spatialize the sound sources [5]. These systems, known as Virtual Auditory Displays (VADs), allow the synthesis of a virtual acoustic space, creating virtual sources that can be moved freely within it. Virtual moving sound sources delivered by VADs are already used in various research and musical applications, which makes the question of the perceptual rendering of these sound source movements fundamental.
