Abstract

Reducing the cost of convolution computation is the key to real-time rendering of multi-source scenes in virtual auditory display (VAD). In this paper, principal components analysis (PCA) of head-related impulse responses (HRIRs) was used to reduce the rendering load. In this method, HRIRs at various directions were decomposed into weighted sums of a small set of common basis functions (vectors), so that HRIRs of sound sources at different directions shared the same basis vectors. In this way, the convolution cost depended only on the number of basis vectors (NBVs) Q and was independent of the number of sources N. Compared with the traditional method, the PCA-based method reduces the convolution load whenever the number of sources N is larger than the NBVs Q. A real-time simulation implemented in Microsoft Visual C++ validated this result. A psychoacoustic experiment was then conducted to evaluate the subjective effect of Q on the HRIRs reconstructed by PCA. Six subjects with prior psychoacoustic experimental experience participated in the experiment. The results show that when Q was larger than 10, subjects could hardly distinguish, with few exceptions, the HRIRs reconstructed by PCA from the original HRIRs. Thus, the PCA-based method is suitable for real-time rendering of multiple sources in VAD.
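The source of the cost saving can be illustrated with a minimal sketch (not the authors' Visual C++ implementation). Assuming each HRIR is approximated as a mean HRIR plus a weighted sum of Q common basis vectors, the per-basis weighted mixing of the source signals can be done before convolution, so the binaural output for one ear needs only Q + 1 convolutions regardless of the number of sources N. All function and variable names below are hypothetical.

```python
import numpy as np

def render_pca(sources, weights, basis, mean_hrir):
    """
    Illustrative PCA-based multi-source rendering for one ear.

    sources   : (N, T) array of source signals
    weights   : (N, Q) array of per-direction PCA weights
    basis     : (Q, L) array of common basis vectors shared by all directions
    mean_hrir : (L,)   mean HRIR included in the decomposition

    Each HRIR is approximated as
        h_i ~= mean_hrir + sum_q weights[i, q] * basis[q],
    so sum_i (s_i * h_i) can be computed with Q + 1 convolutions
    instead of N, because the weighted mixing is done before convolution.
    """
    mixed_mean = sources.sum(axis=0)      # single mix feeding the mean-HRIR convolution
    mixed_q = weights.T @ sources         # (Q, T): one weighted mix per basis vector
    out = np.convolve(mixed_mean, mean_hrir)
    for q in range(basis.shape[0]):
        out = out + np.convolve(mixed_q[q], basis[q])
    return out
```

Because the mixing step is only multiply-adds on the source signals, the convolution count stays at Q + 1 per ear; the direct approach would instead perform one full HRIR convolution per source, which is why the saving appears when N exceeds Q.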
