Abstract

In the past decade, research in Music Information Retrieval (MIR) has created a wealth of methods to extract latent musical information from the audio signal. While these methods are capable of inferring acoustic similarities between music pieces, revealing a song's structure, or identifying a piece from a noisy recording, they cannot capture semantic information that is not encoded in the audio signal but is nonetheless essential to many listeners. For instance, the meaning of a song's lyrics, the background of a singer, or the work's historical context cannot be derived without additional metadata. Such semantic information on music items can, however, be derived from other sources, including the web and social media, in particular services dedicated to the music domain. These sources typically offer a wide variety of multimedia data, including user-generated content, usage data, text, audio, video, and images. On the other hand, using these newly available sources of semantically meaningful information also poses new challenges, among them dealing with the massive amounts of information and with the noisiness of this kind of data, introduced, for example, by various user biases or the injection of spurious information. This also calls for novel methods for user-centric evaluation of music retrieval systems. Given the strengths and shortcomings inherent to both content-based and context-based approaches, hybrid methods that intelligently combine the two are essential.
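To make the hybrid idea concrete, the sketch below shows one simple way such a combination is often realized: a weighted linear mix of a content-based similarity (computed on audio feature vectors) and a context-based similarity (computed on, e.g., TF-IDF weights over social tags). All names, vectors, and weights here are illustrative assumptions, not the specific method of the paper.

```python
# A minimal sketch of a hybrid music similarity measure, assuming two
# hypothetical per-track representations: an audio feature vector
# (content, e.g. mean MFCCs) and a context vector (e.g. TF-IDF weights
# over social tags). Data and parameter choices are illustrative only.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; returns 0.0 if either vector is all zeros."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def hybrid_similarity(audio_a, audio_b, context_a, context_b, alpha=0.5):
    """Linear combination of content- and context-based similarity.

    alpha weights the audio (content) component; (1 - alpha) weights
    the web/social-media (context) component.
    """
    return (alpha * cosine(audio_a, audio_b)
            + (1 - alpha) * cosine(context_a, context_b))

# Toy example: two tracks that sound alike but have different tag profiles.
audio_1, audio_2 = np.array([0.8, 0.1, 0.3]), np.array([0.7, 0.2, 0.4])
tags_1, tags_2 = np.array([1.0, 0.0, 0.0, 0.5]), np.array([0.0, 1.0, 0.2, 0.0])
print(hybrid_similarity(audio_1, audio_2, tags_1, tags_2, alpha=0.7))
```

A fixed alpha is the simplest choice; in practice the weighting can also be learned or adapted per user, which is one way to address the user biases and noise mentioned above.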
