Abstract

Thanks to the development of deep learning, various sound source separation networks have been proposed and have made significant progress. However, the study of the underlying separation mechanisms is still in its infancy. In this study, deep networks are explained from the perspective of auditory perception mechanisms. To separate two arbitrary sound sources from monaural recordings, three networks with different structures and parameters are trained, and all achieve excellent performance. Their outputs reach an average scale-invariant signal-to-distortion ratio improvement (SI-SDRi) of more than 10 dB, comparable with human performance in separating natural sources. More importantly, the most intuitive principle, proximity, is explored through simultaneous and sequential organization experiments. Results show that, regardless of network structure and parameters, the proximity principle is learned spontaneously by all networks: components that are proximate in frequency or time are not easily separated. Moreover, the networks’ frequency resolution is better at low frequencies than at high frequencies. These behavioral characteristics of all three networks are highly consistent with those of the human auditory system, which implies that the learned proximity principle is not accidental but the optimal strategy selected by both networks and humans when facing the same task. The emergence of auditory-like separation mechanisms suggests the possibility of developing a universal system that can adapt to all sources and scenes.
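
For reference, SI-SDR measures how much of an estimate is explained by a scaled copy of the reference signal, and SI-SDRi is the gain of the estimate over the unprocessed mixture. A minimal numpy sketch of the standard definitions (illustrative only; not code from the paper) is:

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant signal-to-distortion ratio in dB."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference; the projection is the
    # "target" part, and everything else counts as distortion.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(noise, noise))

def si_sdri(estimate: np.ndarray, reference: np.ndarray,
            mixture: np.ndarray) -> float:
    """Improvement of the separated estimate over the raw mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```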

Highlights

  • Sound source separation is an essential part of machine listening and is beneficial to many real-life audio applications

  • In our previous attempt [18] to unravel the underlying separation principles learned by networks, results showed that the proximity principle was learned by ConvTasNet

  • The effects of network structure and parameters on frequency resolution are quantitatively analyzed through the proximity experiment, providing a new method to explain the “black box” of deep networks from the perspective of auditory perception mechanisms (a toy stimulus sketch follows this list)
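
As a hypothetical illustration of such a proximity probe (the paper’s exact stimuli are not reproduced here; the function name and values below are assumptions), two pure tones can be mixed with an adjustable frequency spacing; the smaller the spacing, the more proximate the components and the harder the separation:

```python
import numpy as np

def two_tone_mixture(f1_hz: float, spacing_hz: float,
                     sr: int = 16000, dur_s: float = 1.0):
    """Mix two pure tones separated by `spacing_hz`. As the spacing
    shrinks, the components become proximate in frequency and harder
    for a separation network (or a listener) to segregate."""
    t = np.arange(int(sr * dur_s)) / sr
    s1 = np.sin(2 * np.pi * f1_hz * t)
    s2 = np.sin(2 * np.pi * (f1_hz + spacing_hz) * t)
    return s1 + s2, (s1, s2)  # mixture and ground-truth sources

# Sweep the spacing at a low and a high base frequency to probe
# frequency resolution (values are illustrative, not the paper's).
stimuli = {
    (f1, df): two_tone_mixture(f1, df)
    for f1 in (250.0, 4000.0)
    for df in (10.0, 50.0, 200.0)
}
```

Feeding such mixtures to a trained network and scoring the outputs with SI-SDRi (as sketched above) would quantify how separation quality degrades as the spacing shrinks, and whether the degradation differs between low and high base frequencies.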

Introduction

Sound source separation is an essential part of machine listening and benefits many real-life audio applications. Music separation, for example, is necessary for audio information retrieval and automatic music transcription [8]. The first approach, computational auditory scene analysis (CASA) models, tends to separate sources based on auditory separation mechanisms. These models focus on extracting acoustic attributes such as pitch [11], onset [12], and amplitude modulation [13], and then group components according to the proximity, similarity, or common fate of these attributes. Most of these models are biologically plausible and easy to explain.
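
As a loose illustration of one such attribute extractor, the sketch below computes an energy-based onset envelope (an assumption for illustration, not the model of [12]); a CASA-style system could then group time-frequency components that share these onsets:

```python
import numpy as np

def onset_strength(x: np.ndarray, frame: int = 512,
                   hop: int = 128) -> np.ndarray:
    """Half-wave-rectified frame-to-frame rise in log energy, a crude
    onset cue; components sharing a common onset can be grouped."""
    n_frames = 1 + (len(x) - frame) // hop
    energy = np.array([np.sum(x[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    log_e = np.log(energy + 1e-10)
    flux = np.diff(log_e, prepend=log_e[0])
    return np.maximum(flux, 0.0)  # keep only energy increases
```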
