Abstract

Symbolic music understanding, which refers to understanding music from symbolic data (e.g., MIDI, as opposed to audio), covers many applications such as genre classification, emotion classification, and music piece matching. While good music representations are beneficial for these applications, the lack of training data hinders representation learning. Inspired by the success of pre-trained models in natural language processing, in this paper we develop MusicBERT, a large-scale pre-trained model for music understanding. To this end, we construct a large-scale symbolic music corpus containing more than one million songs. Since symbolic music carries more structural information (e.g., bar, position) and more diverse information (e.g., tempo, instrument, and pitch) than text, simply adopting pre-training techniques from NLP brings only marginal gains. Therefore, we design several mechanisms, including the OctupleMIDI encoding and a bar-level masking strategy, to enhance pre-training with symbolic music data. Experiments demonstrate the advantages of MusicBERT on four music understanding tasks: melody completion, accompaniment suggestion, genre classification, and style classification. Ablation studies further verify the effectiveness of the OctupleMIDI encoding and the bar-level masking strategy in MusicBERT.
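To make the bar-level masking strategy concrete, here is a minimal sketch of the idea (not the paper's implementation): attributes of the same type are masked together across all notes of a bar, so a masked value cannot be trivially copied from the highly correlated neighboring notes in the same bar. The list-of-lists input format and the helper `bar_level_mask` are assumptions for illustration.

```python
import random

# Field order of an OctupleMIDI note, as described in the paper:
# (time signature, tempo, bar, position, instrument, pitch, duration, velocity).
NUM_FIELDS = 8
BAR_FIELD = 2      # index of the bar number within each note tuple
MASK = "<mask>"

def bar_level_mask(notes, mask_prob=0.15, rng=random):
    """Mask whole (bar, field) groups rather than individual elements.

    `notes` is a list of 8-element lists, one per note (a simplified
    stand-in for the tokenized input). Every (bar, field) pair selected
    with probability `mask_prob` is masked in *all* notes of that bar,
    which prevents information leakage between correlated tokens.
    """
    masked = [list(n) for n in notes]
    bars = {n[BAR_FIELD] for n in notes}
    selected = {(b, f) for b in bars for f in range(NUM_FIELDS)
                if rng.random() < mask_prob}
    for note in masked:
        bar = note[BAR_FIELD]  # read before the bar field itself may be masked
        for f in range(NUM_FIELDS):
            if (bar, f) in selected:
                note[f] = MASK
    return masked
```

Compared with masking single elements, this forces the model to predict, say, a bar's pitches from other bars and other attribute types instead of copying them from adjacent notes in the same bar.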

Highlights

  • Music understanding, including tasks such as genre classification, emotion classification, and music piece matching, has attracted much attention in both academia and industry

  • We first introduce the pre-training setup for MusicBERT, and then fine-tune MusicBERT on several downstream music understanding tasks to compare it with previous approaches

  • Model Configuration: We pre-train two versions of MusicBERT: 1) MusicBERTsmall on the small-scale LMD dataset, mainly for a fair comparison with previous music understanding models such as PiRhDy (Liang et al., 2020) and melody2vec (Hirai and Sawada, 2019), which were also pre-trained on LMD; and 2) MusicBERTbase on the large-scale Million MIDI Dataset (MMD), to push the state-of-the-art results and demonstrate the scalability of MusicBERT

Introduction

Music understanding, including tasks such as genre classification, emotion classification, and music piece matching, has attracted much attention in both academia and industry. Since songs are more structured (e.g., bars and positions) and more diverse (e.g., tempo, instrument, and pitch), encoding symbolic music is more complicated than encoding natural language. The existing piano-roll-like (Ji et al., 2020) and MIDI-like (Huang and Yang, 2020; Ren et al., 2020) representations of a song are too long to be processed by pre-trained models. Due to the limits of computational resources, the length of the sequence processed by a Transformer model is usually cropped to below 1,000 tokens, and such truncated representations cannot capture sufficient information for song-level tasks.
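This length problem is what the OctupleMIDI encoding addresses: each note becomes a single token bundling eight attributes, instead of being spread over several separate event tokens. Below is a minimal sketch under stated assumptions; the eight field names follow the paper's description of OctupleMIDI, while the dict-based input format and the toy values are hypothetical.

```python
from collections import namedtuple

# One OctupleMIDI token per note: eight attributes bundled together.
OctupleNote = namedtuple(
    "OctupleNote",
    ["time_sig", "tempo", "bar", "position",
     "instrument", "pitch", "duration", "velocity"],
)

def encode_octuple(notes):
    """Encode a song as one 8-tuple token per note.

    `notes` is assumed to be an iterable of dicts whose eight attributes
    are already quantized to vocabulary indices or symbols.
    """
    return [OctupleNote(**n) for n in notes]

# A toy two-note fragment (values are made up for illustration):
song = [
    {"time_sig": "4/4", "tempo": 120, "bar": 0, "position": 0,
     "instrument": 0, "pitch": 60, "duration": 8, "velocity": 64},
    {"time_sig": "4/4", "tempo": 120, "bar": 0, "position": 8,
     "instrument": 0, "pitch": 64, "duration": 8, "velocity": 64},
]
tokens = encode_octuple(song)   # len(tokens) == 2: one token per note
```

Because a MIDI-like event sequence typically spends several tokens per note (bar and position markers, tempo changes, pitch, duration, velocity), packing them into one octuple shortens the sequence several-fold, letting a whole song fit within the roughly 1,000-token budget mentioned above.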
