Abstract

Most feature extraction algorithms for music audio signals use Fourier transforms to obtain coefficients that describe specific aspects of music information within the sound spectrum, such as timbral texture, tonal texture and rhythmic activity. In this paper, we introduce a new method for extracting features related to the rhythmic activity of music signals using the topological properties of a graph constructed from the audio signal. We map the local standard deviation of a music signal to a visibility graph and calculate the modularity (Q), the number of communities (Nc), the average degree (〈k〉), and the density (Δ) of this graph. By applying this procedure to each signal in a database spanning several musical genres, we detected a hierarchy of rhythmic self-similarities between musical styles given by these four network properties. Using Q, Nc, 〈k〉 and Δ as input attributes in a classification experiment based on supervised artificial neural networks, we obtained an accuracy higher than or equal to that of the beat histogram in 70% of the musical genre pairs, while using only four network features. Finally, in an attribute selection test combining Q, Nc, 〈k〉 and Δ with the main signal processing descriptors, the four network properties ranked among the top positions.
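The pipeline described in the abstract can be sketched in a few steps: compute the local standard deviation of the signal over short frames, build the natural visibility graph of that series, and read off Q, Nc, 〈k〉 and Δ. Below is a minimal sketch in Python with NetworkX; the frame length, hop size and the choice of greedy modularity maximization for community detection are assumptions of this sketch, not necessarily the paper's settings.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community as nx_comm


def local_std(signal, frame_len=1024, hop=512):
    """Local standard deviation over sliding frames.
    Frame/hop sizes are illustrative assumptions, not the paper's values."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.std(f) for f in frames])


def natural_visibility_graph(series):
    """Natural visibility graph: nodes i and j are linked if every intermediate
    sample lies strictly below the straight line joining (i, y_i) and (j, y_j)."""
    n = len(series)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                G.add_edge(i, j)
    return G


def asvd_features(signal):
    """Q, Nc, <k> and density of the visibility graph of the local-std series."""
    series = local_std(signal)
    G = natural_visibility_graph(series)
    parts = nx_comm.greedy_modularity_communities(G)  # stand-in community detection
    Q = nx_comm.modularity(G, parts)                  # modularity of that partition
    Nc = len(parts)                                   # number of communities
    avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
    return Q, Nc, avg_degree, nx.density(G)
```

Consecutive samples always see each other, so every node is linked to its neighbours; the naive double loop above runs in roughly cubic time, which is adequate for a short local-std series but would need a faster visibility algorithm for long signals.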

Highlights

  • When we compare the rate of instances correctly classified by musical genre using just the ASVD with that using just the beat histogram, the ASVD achieves, for seven of the ten genres, hit rates higher than or equal to those of the beat histogram, and its mean hit rate is also the higher of the two (Fig 9)

  • We introduced the Audio Signal Visibility Descriptor (ASVD) as a new way to extract features from audio signals for the classification of musical genres, using network properties rather than Fourier transform-based algorithms

  • We showed that the visibility graphs constructed from audio signals reveal, through the graphical representation (Fig 6) of the ASVD parameters (Tables 1 and 2), distinct features associated with the rhythmic activity of musical styles


Summary

Introduction

We map the local standard deviation of a music signal to a visibility graph and calculate the modularity (Q), the number of communities (Nc), the average degree (〈k〉), and the density (Δ) of this graph. We set up an attribute vector by joining the ASVD with 18 Audio Signal Processing Descriptors (ASPD): 13 MFCCs, Spectral Flux, Zero Crossing Rate, Onset Rate, Loudness and Dynamic Complexity.
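As a rough illustration of how such an attribute vector might be assembled, the sketch below joins the four ASVD network properties (from the earlier sketch) with librosa-based stand-ins for part of the ASPD set; the choice of librosa, its default frame parameters, and the omission of Loudness and Dynamic Complexity (which would come from another toolkit such as Essentia) are assumptions of this sketch, not the paper's exact extraction setup.

```python
import numpy as np
import librosa


def aspd_features(y, sr):
    """Illustrative stand-ins for part of the 18 ASPD descriptors."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # 13 MFCC means
    flux = librosa.onset.onset_strength(y=y, sr=sr).mean()            # spectral-flux-like novelty
    zcr = librosa.feature.zero_crossing_rate(y).mean()                # zero crossing rate
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
    onset_rate = len(onsets) / (len(y) / sr)                          # onsets per second
    # Loudness and Dynamic Complexity are omitted here; in a full
    # 18-descriptor vector they would come from another toolkit (e.g. Essentia).
    return np.concatenate([mfcc, [flux, zcr, onset_rate]])


def attribute_vector(y, sr):
    """Join the ASVD network properties (Q, Nc, <k>, density) with the ASPD stand-ins."""
    asvd = np.array(asvd_features(y))  # from the visibility-graph sketch above
    return np.concatenate([asvd, aspd_features(y, sr)])
```

Such a vector can then be fed to any supervised classifier; the paper's experiments use supervised artificial neural networks.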

