Abstract

The automatic control of emotional expression in music is a challenge that is far from being solved. This paper describes research conducted with the aim of developing a system with such capabilities. The system works with standard MIDI files and operates in two stages: the first offline, the second online. In the first stage, MIDI files are partitioned into segments with uniform emotional content. These are subjected to a process of feature extraction, then classified according to emotional values of valence and arousal and stored in a music base. In the second stage, segments are selected and transformed according to the desired emotion and then arranged into song-like structures. The system uses a knowledge base grounded in empirical results from Music Psychology, refined with data obtained from questionnaires; we also plan to use data obtained with other methods of emotion recognition in the near future. For the experimental setups, we prepared web-based questionnaires with musical segments of different emotional content. After listening to each segment, subjects rated it with values for valence and arousal. The modularity, adaptability and flexibility of our system's architecture make it applicable in various contexts, such as video games, theater, films and healthcare.
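To make the two-stage split concrete, the following is a minimal sketch, not taken from the paper: the names (Segment, MusicBase, select) and the valence/arousal ranges are assumptions for illustration only. It shows an offline-built store of classified segments and an online selection step that picks the segment closest to a target (valence, arousal) point; the transformation and arrangement into song-like structures would follow after selection.

```python
# Illustrative sketch only; Segment, MusicBase and their fields are
# hypothetical names, not the paper's actual implementation.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Segment:
    """A MIDI segment with uniform emotional content (stage 1 output)."""
    midi_path: str    # source MIDI file the segment was cut from
    start_tick: int   # segment boundaries inside the file
    end_tick: int
    valence: float    # assumed classified values in [-1, 1]
    arousal: float


@dataclass
class MusicBase:
    """Offline-built store of classified segments (the 'music base')."""
    segments: List[Segment] = field(default_factory=list)

    def add(self, segment: Segment) -> None:
        self.segments.append(segment)

    def select(self, target: Tuple[float, float]) -> Segment:
        """Online stage: return the segment closest to the desired
        (valence, arousal) point, to be transformed and arranged next."""
        tv, ta = target
        return min(
            self.segments,
            key=lambda s: (s.valence - tv) ** 2 + (s.arousal - ta) ** 2,
        )


# Example: request a calm, positive segment (high valence, low arousal).
base = MusicBase()
base.add(Segment("song_a.mid", 0, 1920, valence=0.8, arousal=0.2))
base.add(Segment("song_b.mid", 0, 1920, valence=-0.6, arousal=0.9))
print(base.select((0.7, 0.1)).midi_path)  # -> song_a.mid
```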
