Abstract

Since the advent of electronic music, and even from early organ consoles and other remotely manipulated instruments, much of the design and research of new musical interfaces has focused on abstracting (both physically and in the design process) the “controller” from the “synthesizer,” and then investigating how best to interface those two classes of hardware with each other and with the player. Yet many of the striking lessons from our history of intimate, expressive musical instruments lie in the blurred boundaries between player, controller, and sound-producing object. Bowed strings, winds, and certainly the human voice all blur these boundaries, both in the design and construction of the “instrument” and in the resulting controls and expressions. This article examines some of the issues involved in creating new expressive electronic musical instruments, and presents an overview of a number of recent projects in the co-design of musical controllers and computer sound-synthesis algorithms. Specific cases are described where the traditional engineering approach of building a controller (a box) and connecting it to a synthesizer (another box) would never have yielded the final product that resulted from the tightly coupled, simultaneous development of a complete musical system. Examples are given where a discovery during synthesis-algorithm development suggested a new control metaphor, and where a control component suggested a new aspect of synthesis.
