Abstract

Raga and tonic are the basic elements from which melody is constructed in Carnatic music. The raga provides the framework for building a melody, whereas the tonic frequency establishes the base relative to which each swara ('R', 'G', etc.) is identified. In this work, an effort is made to identify the raga and tonic of a given piece of Carnatic music. The proposed method has two phases. In the first phase, the tonic and raga are determined independently using features extracted from the pitch histogram. In the second phase, the raga and tonic are updated iteratively using the derived note information. The raga is recognised from features extracted from the probability density function (pdf) of the pitch values in the music clip, using classifiers such as feedforward neural networks, Gaussian mixture models and decision trees. A mathematical model based on the parameters of the pitch pdf is proposed for tonic identification. The proposed raga and tonic identification system is evaluated on two datasets: 213 music clips from 14 ragas, and the CompMusic dataset (538 clips from 17 ragas). On the first dataset, the average accuracies of raga and tonic identification are 90.14% and 94.83%, respectively. On the CompMusic dataset, an average accuracy of 95% is achieved for raga identification.
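
As a rough illustration of the first-phase idea, the sketch below builds an empirical pitch pdf by folding extracted pitch values into one octave above a tonic and histogramming them, then trains one of the classifier types named above on the pdf bins. This is not the authors' exact pipeline: the bin resolution, the synthetic "raga" data, and all function names here are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def pitch_pdf(pitch_hz, tonic_hz, bins_per_octave=120):
    """Fold pitch values into one octave above the tonic and return a
    normalized histogram (an empirical pdf over pitch classes)."""
    cents = 1200.0 * np.log2(pitch_hz / tonic_hz)   # pitch in cents re: tonic
    cents = np.mod(cents, 1200.0)                   # fold into a single octave
    hist, _ = np.histogram(cents, bins=bins_per_octave, range=(0.0, 1200.0))
    return hist / max(hist.sum(), 1)                # normalize to a pdf

# --- toy data: two hypothetical "ragas" emphasizing different swaras ---
rng = np.random.default_rng(0)

def toy_clip(scale_cents, n=2000, tonic=146.8):
    # Synthesize pitch values clustered around the given scale degrees,
    # with some spread to mimic intonation and ornamentation.
    centers = rng.choice(scale_cents, size=n)
    cents = centers + rng.normal(0.0, 20.0, size=n)
    return tonic * 2.0 ** (cents / 1200.0)

raga_a = [0, 200, 400, 700, 900]    # hypothetical scale A (cents re: tonic)
raga_b = [0, 100, 300, 700, 800]    # hypothetical scale B

X = np.array([pitch_pdf(toy_clip(raga_a), 146.8) for _ in range(20)]
             + [pitch_pdf(toy_clip(raga_b), 146.8) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

# Any of the classifiers named in the abstract (feedforward neural network,
# GMMs, decision trees) can be trained on the pdf bins; an MLP is shown here.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In the paper's second phase, the note information derived from such a pdf would feed back to refine both the tonic estimate and the raga label iteratively; that refinement loop is not shown in this sketch.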
