Abstract

Live computer music is the perfect medium for generative music systems, for non-linear compositional constructions and for interactive manipulation of sound processing. Unfortunately, much of the complexity of these real-time systems is lost on a potential audience, excepting those few connoisseurs who sneak round the back to check the laptop screen. An artist using powerful software like SuperCollider or PD cannot be readily distinguished from someone checking their e-mail whilst DJ-ing with iTunes. Without a culture of understanding of both the laptop performer and current-generation graphical and text-programming languages for audio, audiences tend to respond most to often-gimmicky controllers, or to the tools they have had more exposure to: the (yawn) superstar DJs and their decks. This article attempts to convey the exciting things that are being explored with algorithmic composition and interactive synthesis techniques in live performance. The reasons for building generative music systems and the forms of control attainable over algorithmic processes are investigated. Direct manual control is set against the use of autonomous software agents. In line with this, four techniques for software control during live performance are introduced, namely presets, previewing, autopilot, and the powerful method of live coding. Finally, audio-visual collaboration is discussed.
