Abstract

One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this work was to develop a method for discriminating between front- and back-located ensembles in binaural recordings of music. To this end, 22,496 binaural excerpts, representing either front- or back-located ensembles, were synthesized by convolving multi-track music recordings with 74 sets of head-related transfer functions (HRTFs). The discrimination method was developed based on the traditional approach, involving hand-engineering of features, as well as using a deep learning technique incorporating a convolutional neural network (CNN). According to the results obtained under HRTF-dependent test conditions, the CNN showed a very high discrimination accuracy (99.4%), slightly outperforming the traditional method. However, under the HRTF-independent test scenario, the CNN performed worse than the traditional algorithm, highlighting the importance of testing the algorithms under HRTF-independent conditions and indicating that the traditional method might be more generalizable than the CNN. A minimum of 20 HRTFs is required to achieve satisfactory generalization performance for the traditional algorithm, and 30 HRTFs for the CNN. The minimum duration of audio excerpts required by both the traditional and CNN-based methods was assessed as 3 s. Feature importance analysis, based on a gradient attribution mapping technique, revealed that for both the traditional and the deep learning methods, a frequency band between 5 and 6 kHz is particularly important for the discrimination between front and back ensemble locations. Linear-frequency cepstral coefficients, interaural level differences, and audio bandwidth were identified as the key descriptors facilitating the discrimination process using the traditional approach.
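
To make the synthesis step described above concrete, the sketch below illustrates how a binaural excerpt could be rendered by convolving mono multi-track stems with the head-related impulse responses (HRIRs) of a chosen HRTF set. This is a minimal illustration, not the authors' pipeline; the function name, the HRIR format, and the normalization step are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def binauralize(stems, hrirs):
    """Render a two-channel binaural mix from mono stems.

    stems : list of 1-D numpy arrays (mono multi-track recordings)
    hrirs : list of (hrir_left, hrir_right) pairs, one head-related impulse
            response pair per stem, taken from the chosen HRTF set at the
            desired direction (e.g. a frontal or a rear azimuth).
    Returns an (n_samples, 2) array holding the left/right binaural mix.
    """
    n_out = max(len(s) + max(len(hl), len(hr)) - 1
                for s, (hl, hr) in zip(stems, hrirs))
    mix = np.zeros((n_out, 2))
    for stem, (h_l, h_r) in zip(stems, hrirs):
        # Convolve the stem with the left- and right-ear impulse responses
        left = fftconvolve(stem, h_l)
        right = fftconvolve(stem, h_r)
        mix[:len(left), 0] += left
        mix[:len(right), 1] += right
    # Peak-normalize so the mix can be written to a fixed-point audio file
    return mix / (np.max(np.abs(mix)) + 1e-12)
```

Rendering the same stems once with front-facing and once with rear-facing HRIR pairs, across many HRTF sets, would produce the two classes of excerpts described above.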

Highlights

  • The renewed and still increasing popularity of binaural technologies, seen over the past decade, promotes the creation of large repositories of binaural audio or audiovisual recordings

  • An example confusion matrix obtained using a combination of all four groups of front-back cues (DB, Langendijk and Bronkhorst [20] (LB), Hebrank and Wright [17] (HW), boosted bands (BB)) and a logistic regression (Logit) classifier is presented in Fig. 5a (for consistency, all the confusion matrices presented in Fig. 5 were obtained using the Logit classifier)

  • Both Mel-frequency cepstral coefficients (MFCCs) and linear-frequency cepstral coefficients (LFCCs) appear to constitute useful features, allowing for the discrimination between front- and back-located ensembles with an accuracy ranging from 89% to 93% and slightly outperforming the cues discussed in the previous section (a sketch of the LFCC computation follows this list)
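
As a rough illustration of the cepstral features named in the last highlight, the following sketch computes LFCCs from a mono ear signal: the DCT of log energies from a triangular filterbank spaced linearly in frequency (MFCCs follow the same recipe with mel-spaced filter edges). The frame length, filter count, and number of retained coefficients are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.fftpack import dct

def lfcc(signal, sr, n_fft=1024, hop=512, n_filters=40, n_coeffs=20):
    """Linear-frequency cepstral coefficients of a mono signal."""
    # Short-time power spectrum (Hann-windowed frames)
    window = np.hanning(n_fft)
    frames = np.array([signal[i:i + n_fft] * window
                       for i in range(0, len(signal) - n_fft, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular filterbank with linearly spaced edges (mel-spaced for MFCCs)
    edges = np.linspace(0, sr / 2, n_filters + 2)
    bins = np.floor(edges / sr * n_fft).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        lo, ce, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:ce] = (np.arange(lo, ce) - lo) / max(ce - lo, 1)
        fbank[m - 1, ce:hi] = (hi - np.arange(ce, hi)) / max(hi - ce, 1)

    # Log filterbank energies followed by a DCT; keep the first n_coeffs
    log_energy = np.log(power @ fbank.T + 1e-10)
    return dct(log_energy, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

Computing such coefficients for the left and right channels of each excerpt (along with interaural differences) would yield feature vectors of the kind referred to in the highlight; the study's own feature set and parameter choices may differ.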

Introduction

The renewed and still increasing popularity of binaural technologies, seen over the past decade, promotes the creation of large repositories of binaural audio or audiovisual recordings. This tendency might give rise to currently unknown challenges in the semantic search and retrieval of such recordings in terms of their “spatial information.” Most of the studies in the area of spatial audio information retrieval intended for binaural signals aim to localize “individual” audio sources, while attempts to characterize the location, depth, or width of “ensembles” of sources are scarce. The task of front-back discrimination is even more challenging for the complex spatial audio scenes considered in this study, with many simultaneous sound-emitting sources (such as in music ensembles), due to the confounding of binaural cues reaching the artificial ears. Some studies support the view that there exist universal macroscopic spectral regions responsible for front-back disambiguation [15–19], whereas others conclude that there are no generic spectral cues and that it is listener-specific spectral cues that help to discriminate front and back sources [20, 21].

