Abstract

Augmented Reality (AR) combines synthetic and real stimuli and is not restricted to visual cues. When computer-generated sound is included in AR environments, it is often assumed that the distance attenuation model is the most intuitive and useful system for all users, regardless of the characteristics of the environment. This model reduces the gain of each sound source as a function of the distance between the source and the listener. In this paper, we propose an attenuation model based not only on distance but also on the listener's orientation, so that users hear more clearly the objects they are looking at, rather than other nearby objects that may lie outside their field of view and interest. We call this a directional attenuation model. To test the model, we developed an AR application involving visual and sound stimuli to compare the traditional model with the new one, considering two different tasks in two AR scenarios in which sound plays an important role. A total of 38 people participated in the experiments. The results show that the proposed model reduces workload in both tasks, requiring less time and effort and allowing users to explore the AR environment more easily and intuitively. This demonstrates that the alternative model has the potential to be more efficient for certain applications.
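The abstract describes the directional model only at a conceptual level. As a rough illustration of the idea, the sketch below combines a standard inverse-distance rolloff with a weight based on the angle between the listener's gaze and the direction to the source; the function name `directional_gain` and the parameters `ref_distance`, `rolloff`, and `min_orientation_gain` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def directional_gain(listener_pos, listener_forward, source_pos,
                     ref_distance=1.0, rolloff=1.0, min_orientation_gain=0.2):
    """Hypothetical directional attenuation: inverse-distance rolloff
    multiplied by a weight derived from the angle between the listener's
    gaze direction and the direction to the sound source."""
    to_source = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
    distance = np.linalg.norm(to_source)
    if distance < 1e-6:
        return 1.0

    # Classic inverse-distance attenuation (the "traditional" model).
    d = max(distance, ref_distance)
    distance_gain = ref_distance / (ref_distance + rolloff * (d - ref_distance))

    # Orientation weight: 1.0 when the source is straight ahead,
    # falling toward min_orientation_gain when it is behind the listener.
    forward = np.asarray(listener_forward, float)
    forward = forward / np.linalg.norm(forward)
    cos_angle = np.dot(forward, to_source / distance)          # in [-1, 1]
    orientation_gain = (min_orientation_gain
                        + (1.0 - min_orientation_gain) * (cos_angle + 1.0) / 2.0)

    return distance_gain * orientation_gain


# A source 2 m away in the line of sight keeps more gain than one
# 2 m away off to the listener's side.
print(directional_gain([0, 0, 0], [0, 0, -1], [0, 0, -2]))   # in view
print(directional_gain([0, 0, 0], [0, 0, -1], [2, 0, 0]))    # to the side
```

Under this sketch, sources the user is facing remain prominent while nearby but off-axis sources are softened, which matches the behaviour the abstract attributes to the directional model.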
