Abstract

Automatic music-understanding technologies (automatic analysis of music signals) make possible the creation of intelligent music interfaces that enrich music experiences and open up new ways of listening to music. In the past, it was common to listen to music in a somewhat passive manner; in the future, people will be able to enjoy music in a more active manner by using music technologies. Listening to music through active interactions is called active music listening. In this keynote speech I first introduce active music listening interfaces, demonstrating how end users can benefit from music-understanding technologies based on signal processing and/or machine learning. By analyzing the music structure (chorus sections), for example, the SmartMusicKIOSK interface enables people to access their favorite part of a song directly (skipping other parts) while viewing a visual representation of the song's structure. I then introduce our recent challenge of deploying such research-level music interfaces as web services open to the public. These services augment people's understanding of music, enable music-synchronized control of computer-graphics animation and robots, and provide various bird's-eye views of a large music collection. In the future, further advances in music-understanding technologies and music interfaces based on them will make interaction between people and music even more active and enriching.
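To make the chorus-skipping idea concrete, the following is a minimal illustrative sketch, not the actual SmartMusicKIOSK implementation: it assumes a song's structure has already been analyzed into labeled sections (the section list, times, and helper function below are hypothetical placeholders), and simply finds the next chorus start time to jump to.

```python
# Minimal sketch of chorus-based skipping (illustrative only; the section data,
# Section type, and next_chorus_start helper are hypothetical, not SmartMusicKIOSK's API).

from dataclasses import dataclass


@dataclass
class Section:
    start: float    # section start time in seconds
    end: float      # section end time in seconds
    is_chorus: bool  # True if a structure analyzer labeled this section a chorus


# Example output of a structure-analysis step (times are made up for illustration).
sections = [
    Section(0.0, 15.2, False),    # intro
    Section(15.2, 48.7, False),   # verse 1
    Section(48.7, 72.3, True),    # chorus 1
    Section(72.3, 105.0, False),  # verse 2
    Section(105.0, 128.6, True),  # chorus 2
]


def next_chorus_start(sections, current_time):
    """Return the start time of the next chorus after current_time, or None."""
    for s in sections:
        if s.is_chorus and s.start > current_time:
            return s.start
    return None


if __name__ == "__main__":
    now = 20.0  # pretend current playback position in seconds
    target = next_chorus_start(sections, now)
    if target is not None:
        print(f"Skipping from {now:.1f}s to the chorus at {target:.1f}s")
```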
