Abstract

Because this symposium is being held in honor of John Pierce, I thought it would be appropriate to discuss a topic related to human-computer communication in computer music, as he has made so many major scientific contributions to the field of communication. The computer's role in music has traditionally been to produce complex output from descriptions that are simplified in some way. While the instrument (in a Music-N sense) might be a rather complex computer program, the score roughly approximates the amount of detail a traditional composer might specify to an orchestra.

In recent years, computers have become available to a far larger audience that includes performers, and the communication between a performer and his or her instrument is of a completely different character from the communication a composer sets down in a score. In performer-instrument communication, the instrument is not reminded of what to do by the low-bandwidth channel of a few terse marks on a page; rather, the performer is continuously engaged in control of the instrument. The detail and complexity of this control are such that it is never completely articulated, hence the use of marks on a page. A common method of controlling so-called real-time synthesizers has been to trigger sounds from a keyboard. Keyboard technique allows a certain degree of accent, phrasing, and articulation; and while many listeners can tell the difference between a real violin and a violin simulation being triggered and released by a keyboard, the important difference is not in the sound but in the performer's motivation. The central issue is that it takes more effort to learn how to make a good sound on a violin than it does to trigger the start of a recording or simulation of a violin, as described by Michel Waisvisz (see Krefeld 1990). Effort is expended in developing a relationship with an instrument over the course of time, which results in a substantial amount of complexity under a performer's fluent management. While a skill situated in one's nervous and motor systems can be referred to by marks, it can never be totally articulated, certainly not by a composer whose involvement in the musical process is to specify an arrangement of those marks.

With the development of controllers such as the Mathews-Boie Radio Drum (Mathews, Boie, and Schloss 1989), we now have ways to measure some of the gestures people make when they play traditional instruments. There has, however, been little discussion of the computational architecture and resources needed to support a situation in which the audio output might actually represent a reduction in the amount of data transmitted compared with the gestural input.

Suppose we have 16 continuous channels of control data arriving at approximately 1000 numbers per second. If we are to use these data in some musically clever way, we need to be able to perform a computation on each channel (which might range from detecting the onset of a note to updating a parameter in a synthesis algorithm) in less than 160 μsec. This figure assumes (unrealistically) that the CPU can spend all of its time watching and processing only this set of control data. With current technology, most of this 160-μsec interval appears to be spent just fetching the data, either across a slow bus or by addressing a serial chip under an interrupt polling scheme. The ideal situation would be one in which the CPU could access these control data as easily as reading an address in memory.
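As a rough illustration of that ideal case, the C sketch below assumes the 16 control channels are visible as memory-mapped registers the CPU can simply read and dispatch on. The base address, the register layout, and the handle_channel() stub are hypothetical placeholders, not part of any system described here; the point is only that fetching a control value costs no more than reading memory, leaving the time budget for the musically interesting computation.

```c
#include <stdint.h>

#define NUM_CHANNELS 16   /* continuous control channels, as in the text */

/* Assumed memory-mapped register block exposing the current value of
 * each control channel; the address is a hypothetical placeholder. */
static volatile uint16_t *const control_regs =
    (volatile uint16_t *)0x00C00000;

/* Application-specific work per channel: detecting the onset of a note,
 * updating a synthesis parameter, and so on.  This stub stands in for
 * that work, which must fit inside the per-update time budget. */
static void handle_channel(int channel, uint16_t value)
{
    (void)channel;
    (void)value;
}

/* Read all 16 channels as ordinary memory and dispatch each value. */
void poll_controllers(void)
{
    for (int channel = 0; channel < NUM_CHANNELS; channel++)
        handle_channel(channel, control_regs[channel]);
}
```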
Such memory-resident access is typically accomplished by direct memory access (DMA) operations, but such operations are rather unintelligent for musical needs; DMA was originally intended to service disk drives.
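One way to approximate memory-resident access without per-sample interrupt overhead is a DMA-filled ring buffer that the CPU drains at its convenience. The following is a minimal sketch under the assumption of a device that deposits (channel, value) records into ordinary memory and advances a write index; none of these names or structures come from the article.

```c
#include <stdint.h>

#define RING_SIZE 1024   /* number of slots; must be a power of two */

/* One control event as deposited by the (hypothetical) DMA device. */
typedef struct {
    uint8_t  channel;    /* which of the 16 control channels */
    uint16_t value;      /* raw controller value             */
} control_event;

typedef struct {
    control_event     events[RING_SIZE];
    volatile uint32_t write_index;   /* advanced only by the device */
    uint32_t          read_index;    /* advanced only by the CPU    */
} control_ring;

/* Consume every event the device has written since the last call,
 * handing each one to an application-supplied handler.  Indices are
 * free-running counters, masked only when indexing the array. */
void drain_ring(control_ring *ring,
                void (*handler)(uint8_t channel, uint16_t value))
{
    while (ring->read_index != ring->write_index) {
        const control_event *e =
            &ring->events[ring->read_index & (RING_SIZE - 1u)];
        handler(e->channel, e->value);
        ring->read_index++;
    }
}
```

Because the device advances only write_index and the CPU advances only read_index, this single-producer, single-consumer arrangement needs no locking, and the cost of fetching gestural data reduces to ordinary memory reads.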
