Abstract

Expression communication is the added value of a musical performance: it is part of what makes music interesting to listen to and sound alive. Previous work on the analysis of acoustical features identified features relevant to the recognition of different expressive intentions, inspired by both emotional and sensorial adjectives. In this article, machine learning techniques are employed to understand how expressive performances, represented by the selected features, cluster in a low-dimensional space, and to define a measure of acoustical similarity. Given that expressive intentions are similar according to the features used for recognition, and since recognition implies subjective evaluation, we hypothesized that performances are also similar from a perceptual point of view. We then compared and integrated the clustering of acoustical features with the results of two listening experiments. The first experiment verifies whether subjects can distinguish different categories of expressive intentions; the second investigates which expressions are perceptually clustered together, in order to derive the common evaluation criteria adopted by listeners and to obtain the perceptual organization of affective and sensorial expressive intentions. An interpretation of the resulting spatial representation based on action is proposed and discussed.
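As a minimal sketch of the kind of pipeline described above (not the authors' actual method; the data, feature set, number of clusters, and choice of PCA with k-means are all assumptions for illustration), the following Python code projects performance-level acoustical feature vectors onto a low-dimensional space, clusters them, and measures acoustical similarity as distance in that space:

```python
# Illustrative sketch only: cluster expressive performances, described by
# acoustical feature vectors, in a low-dimensional space and derive a
# pairwise acoustical similarity. Data and parameters are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(0)
# Hypothetical data: 60 performances x 8 acoustical features
# (e.g., tempo, articulation, and intensity descriptors).
X = rng.normal(size=(60, 8))

# Standardize the features, then reduce them to two dimensions.
Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Cluster the performances in the reduced space.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

# Acoustical similarity as Euclidean distance in the reduced space.
D = pairwise_distances(Z)
print(labels[:10], D.shape)
```

In such a setup, the low-dimensional coordinates can then be compared with the perceptual organization obtained from the listening experiments.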
