Abstract
This paper proposes a novel algorithm for the automatic classification of song excerpts according to their meter. The proposed algorithm performs two types of analyses: a local analysis of the acoustic properties (e.g. spectrum) in the vicinity of each beat, and a larger time-span analysis of the acoustic properties (e.g. pitch) measured over the intervals between successive beats. Compared to existing algorithms, it pays more attention to the temporal shape of events near the beat and introduces a pitch analysis over the inter-beat intervals. Moreover, it extracts similarity features expressing the differences between the acoustic properties at subsequent beats and during subsequent inter-beat intervals, respectively. Finally, a dedicated feature selection approach is proposed to control the training of the stochastic models that perform the final classification. An experimental validation of the new algorithm is carried out on a standard meter classification dataset and on a new dataset that is much larger and more diverse. A comparison with three state-of-the-art algorithms shows that the proposed algorithm outperforms them in every tested configuration. Moreover, an inspection of the features selected for the classification models indicates that the periodicity evidence derived from some of the newly introduced features is more relevant than that derived from the traditional features.
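To make the two-level analysis concrete, the following is a minimal sketch (not the authors' implementation) of the kind of beat-synchronous feature extraction the abstract describes: local spectral features in a short window around each beat, an interval-level pitch proxy (here a spectral centroid, an illustrative stand-in for the paper's pitch analysis) over each inter-beat interval, and similarity features comparing consecutive beats. All function names, window sizes, and choices of descriptor are assumptions for illustration only.

```python
# Illustrative sketch of beat-synchronous feature extraction; all names
# and parameters are assumptions, not the paper's actual method.
import numpy as np

def local_beat_features(signal, beat_samples, half_win=512):
    """Magnitude spectrum in a short window centred on each beat."""
    feats = []
    for b in beat_samples:
        seg = signal[max(0, b - half_win): b + half_win]
        # Zero-padded FFT so every beat yields a vector of the same length.
        spec = np.abs(np.fft.rfft(seg, n=2 * half_win))
        feats.append(spec)
    return np.array(feats)

def interbeat_features(signal, beat_samples, sr):
    """Pitch proxy (spectral centroid) over each inter-beat interval."""
    feats = []
    for b0, b1 in zip(beat_samples[:-1], beat_samples[1:]):
        seg = signal[b0:b1]
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(len(seg), d=1.0 / sr)
        centroid = (freqs * spec).sum() / (spec.sum() + 1e-12)
        feats.append(centroid)
    return np.array(feats)

def beat_similarity(feats):
    """Cosine similarity between feature vectors at successive beats;
    periodicities in this sequence can serve as evidence of the meter."""
    a, b = feats[:-1], feats[1:]
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den
```

In such a pipeline, the resulting similarity sequences would typically be scanned for periodicities (e.g. via autocorrelation at candidate meter lags) and fed, together with the local and interval features, to the trained classification models.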