Abstract

Sounds, especially in a musical context, are often presented in mixtures. Because traditional signal processing theory is not well equipped to deal with concurrent sound sources, this limits the analysis and creative options for musical audio. As recently shown, sound mixtures can be analyzed with latent variable models of time-frequency distributions, thereby revealing the additive structure of acoustic scenes. Unfortunately, such processes separate acoustic elements in an indiscriminate manner that does not always result in extracting the desired sources. However, when a user can provide examples of the sources to extract, the aforementioned algorithms become powerful tools for supervised source separation. In experiments on real single-channel recordings, it was shown that by simply indicating the regions of time where a particular source is active, it is possible to extract that source with minimal distortion, even though it may constantly be part of a sound mixture. Since this is an example-based procedure, it can deal with arbitrary sources, regardless of their acoustic properties, without requiring any tedious heuristic modeling. This extraction method facilitates the analysis of musical data and also allows creative manipulation of music.
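The abstract does not spell out the decomposition algorithm, so the following is only a minimal illustrative sketch of the supervised workflow it describes: learn a spectral dictionary from the user-marked example regions, then explain the mixture spectrogram with that fixed dictionary plus extra components for the remaining scene, and mask out the target's contribution. It uses a KL-divergence NMF (a close relative of the latent variable time-frequency models referred to above, not necessarily the paper's exact method); all function names, component counts, and parameters here are hypothetical choices for the sketch.

```python
import numpy as np

EPS = 1e-9

def nmf_kl(V, W, H, n_iter=200, fixed_cols=0):
    """Multiplicative-update NMF under the KL divergence.
    The first `fixed_cols` columns of W are frozen (a dictionary
    pre-learned from the user-marked source examples)."""
    for _ in range(n_iter):
        WH = W @ H + EPS
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + EPS)
        WH = W @ H + EPS
        W_new = W * ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + EPS)
        W_new[:, :fixed_cols] = W[:, :fixed_cols]  # keep the source dictionary fixed
        W = W_new
    return W, H

def supervised_separation(V_mix, V_train, k_src=20, k_other=20, seed=0):
    """Given a mixture magnitude spectrogram V_mix and a spectrogram
    V_train taken from time regions the user marked as containing the
    target source, return a soft mask isolating that source in V_mix."""
    rng = np.random.default_rng(seed)
    F = V_mix.shape[0]
    # 1) Learn spectral bases for the target source from the examples.
    W_s, _ = nmf_kl(V_train,
                    rng.random((F, k_src)) + EPS,
                    rng.random((k_src, V_train.shape[1])) + EPS)
    # 2) Explain the mixture with the fixed source bases plus extra
    #    bases that absorb everything else in the acoustic scene.
    W0 = np.hstack([W_s, rng.random((F, k_other)) + EPS])
    H0 = rng.random((k_src + k_other, V_mix.shape[1])) + EPS
    W, H = nmf_kl(V_mix, W0, H0, fixed_cols=k_src)
    # 3) Wiener-style soft mask from the target's share of the model;
    #    applying it to the complex mixture STFT yields the extracted source.
    V_src = W[:, :k_src] @ H[:k_src]
    return V_src / (W @ H + EPS)
```

In practice the mask would be applied to the complex STFT of the mixture and inverted back to the time domain; the key point mirrored from the abstract is that only example regions of the target, not any hand-crafted acoustic model, drive the extraction.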
