Abstract

Virtual loudspeaker arrays can be rendered over headphones using a head-related impulse response (HRIR) dataset, but these techniques often introduce undesirable spatial and timbral artifacts that are associated with distortions in the interaural level difference (ILD) cue. Here, a new and efficient virtual acoustic rendering technique that minimizes ILD distortions is described and evaluated. The technique uses principal components analysis (PCA) performed on the time-domain minimum-phase representation of the directionally dependent HRIRs. The resulting principal components (PCs) act as virtual acoustic filters, and the PCA scores are implemented as panning gains. These panning gains map the PC filters to source locations, similar to vector-base amplitude panning (VBAP). Thus, the technique is termed principal components-based amplitude panning (PCBAP). A spatial and spectral error analysis of ILD shows that PCBAP with 15 filters achieves reconstruction accuracy similar to that of a 256-loudspeaker VBAP array. PCBAP is also scalable to simulate large loudspeaker arrays with a fixed set of PC filters and only small increases in computational requirements. Results from psychophysical testing suggest that perceptual equivalence between a PCBAP-rendered source and a reference source rendered with the original HRIR dataset is directionally dependent but can be achieved with as few as 10–20 PCs.
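The core decomposition described above — PCA over a directionally dependent HRIR dataset, with the PCs serving as a fixed filter bank and the per-direction scores serving as panning gains — can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' implementation; the array sizes, the direction count, and the use of SVD to compute the PCA are assumptions.

```python
import numpy as np

# Synthetic stand-in for a measured minimum-phase HRIR dataset:
# n_dirs directions, each with an n_taps-tap impulse response.
# (A real application would load measured HRIRs here.)
rng = np.random.default_rng(0)
n_dirs, n_taps = 360, 128
hrirs = rng.standard_normal((n_dirs, n_taps))

# PCA via SVD of the mean-centered HRIR matrix.
mean_hrir = hrirs.mean(axis=0)
U, S, Vt = np.linalg.svd(hrirs - mean_hrir, full_matrices=False)

K = 15                          # number of retained PCs (PC filters)
pc_filters = Vt[:K]             # (K, n_taps): fixed bank of PC filters
scores = U[:, :K] * S[:K]       # (n_dirs, K): direction-dependent panning gains

# Rendering a source at direction d amounts to filtering it through the K
# fixed PC filters, weighting each output by that direction's score, and
# adding the mean HRIR — analogous to panning gains over virtual loudspeakers.
d = 42
approx_hrir = mean_hrir + scores[d] @ pc_filters
rel_err = np.linalg.norm(approx_hrir - hrirs[d]) / np.linalg.norm(hrirs[d])
```

Because the PC filter bank is fixed, moving a source only changes the K panning gains, which is what makes the approach scalable to large simulated arrays at little extra cost.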
