Abstract

By comparing the features of music and speech in intelligent audio signal processing, both research fields might benefit from each other. Music and speech are both ways for humans to express themselves. The aim of this study is to show similarities and differences between music and speech by comparing their hierarchical structures, with an emphasis on rhythm. Examining the temporal structure of music and speech in particular could yield new, interesting features that improve existing technology: utilizing rhythm in synthetic speech remains an open issue, and rhythmic features for music still need improvement in the fields of semantic search and music similarity retrieval. Theoretical aspects of rhythm in speech and music are discussed, as well as practical issues in speech and music research. To show that common approaches are inherently feasible, an algorithm for onset detection is applied to both speech and musical signals.
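The abstract does not specify which onset-detection algorithm the paper uses. As an illustration only, the sketch below implements positive spectral flux with simple peak picking, a common baseline for onset detection that works on both speech and music signals; the function name, parameters, and thresholding rule are assumptions, not the authors' method.

```python
import numpy as np

def spectral_flux_onsets(signal, sr, frame=1024, hop=512, threshold=None):
    """Estimate onset times (seconds) via positive spectral flux.

    A common baseline for onset detection, not necessarily the
    algorithm used in the paper.
    """
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    # Magnitude spectrum of each windowed frame.
    mags = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop:i * hop + frame]))
        for i in range(n_frames)
    ])
    # Positive spectral flux: sum of magnitude increases between frames.
    flux = np.maximum(np.diff(mags, axis=0), 0).sum(axis=1)
    if threshold is None:
        threshold = flux.mean() + flux.std()  # crude adaptive threshold
    # Simple peak picking: above threshold and a local maximum.
    peaks = [i for i in range(1, len(flux) - 1)
             if flux[i] > threshold
             and flux[i] >= flux[i - 1] and flux[i] > flux[i + 1]]
    return [(p + 1) * hop / sr for p in peaks]

# Toy example: two short 440 Hz tone bursts in silence.
sr = 16000
t = np.arange(sr) / sr
sig = np.zeros(sr)
for start in (0.2, 0.6):
    idx = (t >= start) & (t < start + 0.1)
    sig[idx] = np.sin(2 * np.pi * 440 * t[idx])
onsets = spectral_flux_onsets(sig, sr)
```

On this toy input the detector should report onsets close to 0.2 s and 0.6 s; real speech and music require more careful thresholding and smoothing.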
