Abstract

With a growing number of people working with large music archives, advanced methods for the automatic labeling and organization of music collections are required, as manual annotation and categorization are not feasible for massive collections. In the research domain of music information retrieval (MIR), a number of algorithms for the content-based description of music have been developed; they extract relevant features for computing the similarity between pieces of music. This fundamental step enables a wide range of applications for music retrieval and organization. With supervised machine learning, music can be classified into different kinds of categories, such as genres, artists, or moods. With unsupervised approaches such as the self-organizing map, music can be clustered by similar style and visualized in a way that enables direct retrieval of similar music at a glance. In this chapter, we review the most common audio feature extraction techniques, which serve as a basis for subsequent classification and clustering tasks. As an example, we show how music is classified into a set of genres and how genre classification can be used for benchmarking. Moreover, the creation of so-called “music maps” and their various visualizations is demonstrated, and an interactive application called “PlaySOM” is presented, whose interface allows direct access to similar-sounding pieces in a large music collection. Its mobile counterpart, the “PocketSOMPlayer”, allows direct playback from a music map on a mobile device without having to browse lists. Both support the convenient interactive creation of situation-based playlists.
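To make the clustering idea concrete, the following is a minimal sketch of the self-organizing map training loop mentioned above. It is not the chapter's actual implementation: audio feature extraction is replaced by plain lists of floats standing in for feature vectors, the grid size and decay schedules are arbitrary choices, and the function names (`train_som`, `best_matching_unit`) are invented for illustration. Pieces whose feature vectors are similar end up mapped to nearby units on the grid, which is what enables the "music map" style of browsing.

```python
import math
import random

def best_matching_unit(units, vec):
    """Return the grid position whose weight vector is closest to vec."""
    return min(units, key=lambda pos: math.dist(units[pos], vec))

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, seed=0):
    """Train a toy self-organizing map on a list of feature vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    # Each map unit starts with a small random weight vector.
    units = {(x, y): [rng.uniform(-0.1, 0.1) for _ in range(dim)]
             for x in range(grid_w) for y in range(grid_h)}
    radius0 = max(grid_w, grid_h) / 2.0
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)                    # learning rate decays
        radius = max(radius0 * (1.0 - frac), 0.5)  # neighbourhood shrinks
        for vec in data:
            bmu = best_matching_unit(units, vec)
            # Pull the winning unit and its grid neighbours toward the input.
            for pos, w in units.items():
                d = math.dist(pos, bmu)
                if d <= radius:
                    influence = math.exp(-(d * d) / (2 * radius * radius))
                    for i in range(dim):
                        w[i] += lr * influence * (vec[i] - w[i])
    return units
```

Trained on feature vectors from two distinct musical styles, two such styles land on different regions of the grid, so their best-matching units differ; in the real application each unit would hold the pieces of music mapped to it.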
