Abstract

Music content processing is receiving increasing attention from the multimedia community. On the one hand, research developments in computer music over the last decades are leading toward more sophisticated and novel musical applications; on the other hand, such research is useful for developing non-musical applications that embed music content processing to enhance effectiveness and usability. Intelligent user interfaces, games and entertainment, edutainment, and rehabilitation are some examples. This paper focuses on the role of analysis and synthesis of expressive content in multimedia signals, with particular emphasis on music, integrated with human movement and visual media in the framework of intelligent interactive systems. Research on “affective computing” in the USA and on “KANSEI information processing” (KIP) in Japan are consolidated research areas dealing with artificial emotions and expressivity. We focus on a third way, in which research on expressivity and artificial emotions is considered in the framework of European culture. In this direction, special emphasis is given to qualitative analysis of human movement and its relations with the music signal, starting from previous studies on movement by choreographers (Laban's “Theory of Effort”). We aim at a system able to distinguish between the different expressive content of two performances of the same dance fragment. Concrete results achieved in recent years, including applications in music theatre and in interactive exhibits for edutainment and museums, are presented briefly.
