Abstract

In this paper, a vocal melody extraction method based on graph modeling is proposed. First, the constant-Q transform (CQT) is applied to the mixed audio signal. Then, the amplitude spectra of several adjacent frames are concatenated to form the input feature. Next, an undirected graph is constructed to model the melody extraction problem: the frequency bins are treated as nodes, and the underlying connection relationships among the frequency bins define the edges. The frame-wise melodic pitches are then estimated by a graph convolutional network (GCN), with pitch estimation treated as a multi-class classification problem. Finally, the quantized frame-wise pitches are refined according to a salience function evaluated within a certain range of the smoothed melody trajectory derived from the pitches estimated by the GCN. The proposed method addresses vocal melody extraction in an explainable way, since the edges are defined by the underlying connection relationships of the different frequency bins. Experimental results demonstrate that the proposed method achieves good performance with lightweight parameters.
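The two core steps of the abstract, concatenating adjacent spectral frames into an input feature and propagating node features over a graph of frequency bins, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the spectrogram is a random stand-in for a CQT magnitude spectrogram, the adjacency matrix is a placeholder chain over neighboring bins (the paper defines edges from the underlying connection relationships of frequency bins), and the context width and weight shapes are arbitrary.

```python
import numpy as np

def concat_adjacent_frames(spec, context=2):
    """Stack each frame with `context` neighbours on either side
    (edge frames padded by replication) to build the input feature."""
    n_bins, n_frames = spec.shape
    padded = np.pad(spec, ((0, 0), (context, context)), mode="edge")
    feats = [padded[:, t:t + 2 * context + 1].reshape(-1)
             for t in range(n_frames)]
    return np.stack(feats)  # shape: (n_frames, n_bins * (2*context + 1))

def gcn_layer(H, A, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalise the adjacency, then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)

# Toy stand-in for a CQT magnitude spectrogram: 12 bins x 5 frames.
spec = np.abs(rng.standard_normal((12, 5)))
X = concat_adjacent_frames(spec, context=2)  # (5, 60)

# Placeholder chain graph: each bin connected to its neighbours.
A = np.eye(12, k=1) + np.eye(12, k=-1)
H = gcn_layer(spec, A, rng.standard_normal((5, 8)))  # (12, 8) node features
print(X.shape, H.shape)
```

In a full model, several such GCN layers would be stacked and followed by a classification head over the quantized pitch classes, one class per frame.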
