Abstract

Each partner in a collaborative music performance must also be an expert listener. Several “performers” have been built for a variety of musical situations, and each has been given a listening skill that permits it to substitute for a human performer. This typically entails real-time pitch detection of an instrument or voice, matching the events heard against an encoded score, and then adding its own part in synchronization. Necessary skills include ultra-sensitivity to changes of tempo, and the ability to improve by learning from rehearsals. This has worked for a broad class of performance situations: Baroque flute/harpsichord, 20th century violin/piano, and some jazz standards—in all cases, music that can be known from a score or by acoustic repetition. Some attention has also been given to how humans first negotiate unfamiliar acoustic territory, such as perceiving rhythmic structure in a piece not heard before. A real-time auditory model of rhythmic perception and cognition (running on a desktop machine with audio I/O) suggests that the neuronal processing involved in foot-tapping to unfamiliar music may be relatively inexpensive as human tasks go.
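The listening loop described above — detect a pitch, match it against an encoded score, and adjust tempo to stay in synchronization — can be sketched minimally as follows. This is not the paper's actual system; it is a hypothetical illustration assuming pitch detection already yields timestamped MIDI pitch events and the score is a list of (beat, pitch) pairs. All class and method names are invented for the sketch.

```python
# Hypothetical sketch of score-following with tempo adaptation.
# Assumes an upstream pitch detector delivering (time, midi_pitch)
# events; the encoded score is an ordered list of (beat, midi_pitch).

class ScoreFollower:
    def __init__(self, score, initial_tempo=120.0):
        self.score = score          # [(beat, pitch), ...] in score order
        self.pos = 0                # index of next expected score event
        self.tempo = initial_tempo  # beats per minute, updated online
        self.last_time = None       # real time of last matched event
        self.last_beat = None       # score beat of last matched event

    def on_pitch(self, time, pitch, window=2):
        """Match a detected pitch against the next few score events.

        Returns the matched score beat, or None when nothing in the
        lookahead window matches (a wrong note or detection error)."""
        for i in range(self.pos, min(self.pos + window, len(self.score))):
            beat, expected = self.score[i]
            if expected == pitch:
                # Estimate tempo from elapsed real time vs elapsed beats,
                # smoothing so one rushed note doesn't derail the pulse.
                if self.last_time is not None and time > self.last_time:
                    beats = beat - self.last_beat
                    if beats > 0:
                        observed = 60.0 * beats / (time - self.last_time)
                        self.tempo = 0.5 * self.tempo + 0.5 * observed
                self.pos = i + 1
                self.last_time, self.last_beat = time, beat
                return beat
        return None
```

The small lookahead `window` gives the follower tolerance to a single missed or wrong note, and the blended tempo update is the crudest possible form of the tempo sensitivity the abstract describes; a real accompanist would also schedule its own output events against the updated tempo estimate.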
