Abstract

Professional musicians manipulate sound properties such as pitch, timing, amplitude, and timbre in order to add expression to their performances. However, there is little quantitative information about how and in which contexts this manipulation occurs. In this chapter, we describe an approach to quantitatively model and analyze expression in monophonic performances of popular music, as well as to identify interpreters from their playing styles. The approach consists of (1) applying sound analysis techniques based on spectral models to real audio performances in order to extract both inter-note and intra-note expressive features, and (2) using these features to train computational models that characterize different aspects of expressive performance via machine learning techniques. The resulting models are applied to the analysis and synthesis of expressive performances as well as to automatic performer identification. We present results indicating that the extracted features contain sufficient information, and that the explored machine learning methods are capable of learning patterns that characterize expressive music performance.
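
The two-stage pipeline summarized above (feature extraction followed by model training) can be sketched as follows. This is a minimal illustrative example, assuming librosa for the audio analysis and scikit-learn for the learning stage; the specific features (per-note pitch, duration, energy) and the decision-tree classifier are hypothetical placeholders, not the exact methods used in the chapter.

```python
# Minimal sketch: extract simple per-note expressive features from a
# monophonic recording, then train a classifier for performer identification.
import numpy as np
import librosa
from sklearn.tree import DecisionTreeClassifier


def extract_note_features(audio_path, sr=22050):
    """Return an array of [pitch, duration, energy] rows, one per detected note."""
    y, sr = librosa.load(audio_path, sr=sr)
    # Segment the signal at detected note onsets.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    boundaries = np.concatenate([onsets, [len(y)]])
    features = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        segment = y[start:end]
        if len(segment) < sr // 20:  # skip spuriously short segments
            continue
        f0, _, _ = librosa.pyin(segment, fmin=65.0, fmax=2093.0, sr=sr)
        pitch = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else 0.0
        duration = (end - start) / sr          # inter-note timing feature
        energy = float(np.mean(segment ** 2))  # amplitude-related feature
        features.append([pitch, duration, energy])
    return np.array(features)


def train_performer_model(feature_sets, performer_labels):
    """Fit a simple classifier mapping note-level features to performer labels."""
    X = np.vstack(feature_sets)
    y = np.concatenate(
        [[label] * len(f) for label, f in zip(performer_labels, feature_sets)]
    )
    clf = DecisionTreeClassifier(max_depth=5)
    clf.fit(X, y)
    return clf
```

In practice, one feature matrix would be computed per labelled recording and the classifier evaluated on held-out performances; the chapter's own feature set additionally covers intra-note descriptors derived from spectral models.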
