Abstract

Music commonly has a clear hierarchical structure, especially in the singing parts, which usually carry the main melody in pop songs. However, most current singing annotation datasets record only the symbolic information of musical notes, ignoring the structure of the music. In this paper, we propose a hierarchical singing annotation dataset consisting of 68 pop songs from YouTube. The dataset records the onset/offset time, pitch, duration, and lyric of each musical note in an enhanced LyRiCs (LRC) format that represents the hierarchical structure of the music. We annotate each song in a two-stage process: first, we create initial labels from the corresponding musical notation and lyrics file; second, we manually calibrate these labels against the raw audio. We validate the labeling accuracy of the proposed dataset mainly by comparing it with an automatic singing transcription (AST) dataset. The results indicate that the proposed dataset reaches the labeling accuracy of AST datasets.
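
To make the per-note annotation concrete, the following is a minimal Python sketch of how one note entry in such an enhanced LRC file might be parsed. The abstract does not specify the exact enhanced-LRC syntax, so the line layout, the `Note` type, and the `parse_note` helper below are hypothetical assumptions for illustration only; duration is derived here as offset minus onset.

```python
import re
from dataclasses import dataclass

@dataclass
class Note:
    onset: float   # onset time in seconds
    offset: float  # offset time in seconds
    pitch: int     # MIDI pitch number
    lyric: str     # syllable or word sung on this note

    @property
    def duration(self) -> float:
        # Duration follows directly from the recorded onset/offset times.
        return self.offset - self.onset

# Assumed (hypothetical) per-note layout: "[mm:ss.xx-mm:ss.xx] pitch lyric"
NOTE_RE = re.compile(r"\[(\d+):(\d+\.\d+)-(\d+):(\d+\.\d+)\]\s+(\d+)\s+(\S+)")

def parse_note(line: str) -> Note | None:
    """Parse one hypothetical enhanced-LRC note line; return None on mismatch."""
    m = NOTE_RE.match(line.strip())
    if m is None:
        return None
    onset = int(m.group(1)) * 60 + float(m.group(2))
    offset = int(m.group(3)) * 60 + float(m.group(4))
    return Note(onset=onset, offset=offset, pitch=int(m.group(5)), lyric=m.group(6))

print(parse_note("[00:12.30-00:12.85] 67 love"))
# Note(onset=12.3, offset=12.85, pitch=67, lyric='love')
```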

