Abstract

This paper presents a method for incorporating the expressivity of human performance into real-time computational audio generation for games and other immersive environments. In film, Foley artistry is widely recognised as enriching the viewer's experience, but the creativity of the Foley artist cannot easily be transferred to interactive environments, where sound cannot be recorded in advance. We present new methods for human performers to control computational audio models, using a model of a squeaky door as a case study. We focus on the process of selecting control parameters and on the mapping layer between gesture and sound, referring to results from a separate user evaluation study. By recording high-level control parameters rather than audio samples, performances can later be varied to suit the details of the interactive environment.
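To make the parameter-recording idea concrete, the sketch below illustrates one possible shape for it. This is not the authors' implementation; all names (GestureFrame, DoorParams, map_gesture, and the specific parameter choices such as hinge angle and pressure) are hypothetical. It shows a mapping layer that turns performer gestures into high-level control parameters for a sound model, records the parameter stream rather than rendered audio, and replays it with scene-specific variation.

```python
# Hedged sketch of the approach described in the abstract: record the
# high-level control parameters of a performance, not the audio, so the
# performance can later be re-rendered with variation. All names and
# parameter choices here are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class GestureFrame:
    """One sample of performer input, e.g. from a fader or stylus."""
    time: float       # seconds since start of performance
    position: float   # normalised 0..1 (e.g. fader travel)
    pressure: float   # normalised 0..1 (e.g. stylus pressure)

@dataclass
class DoorParams:
    """Hypothetical high-level controls for a squeaky-door model."""
    time: float
    hinge_angle: float       # radians, drives the hinge friction model
    angular_velocity: float  # rad/s, governs squeak excitation
    pressure: float          # scales friction force

def map_gesture(prev: GestureFrame, cur: GestureFrame) -> DoorParams:
    """Mapping layer: gesture -> model parameters, here a simple
    one-to-one mapping with a finite-difference velocity estimate."""
    dt = max(cur.time - prev.time, 1e-6)
    angle = cur.position * 1.5        # fader travel -> ~86 deg of swing
    prev_angle = prev.position * 1.5
    return DoorParams(
        time=cur.time,
        hinge_angle=angle,
        angular_velocity=(angle - prev_angle) / dt,
        pressure=cur.pressure,
    )

def record_performance(gestures: List[GestureFrame]) -> List[DoorParams]:
    """Record the *parameter* stream rather than rendered audio."""
    return [map_gesture(a, b) for a, b in zip(gestures, gestures[1:])]

def replay(params: List[DoorParams], speed: float = 1.0,
           intensity: float = 1.0) -> List[DoorParams]:
    """Because parameters were recorded, the same performance can be
    re-rendered varied: a faster swing, a heavier door, and so on."""
    return [
        DoorParams(
            time=p.time / speed,
            hinge_angle=p.hinge_angle,
            angular_velocity=p.angular_velocity * speed,
            pressure=min(p.pressure * intensity, 1.0),
        )
        for p in params
    ]

if __name__ == "__main__":
    gestures = [GestureFrame(t * 0.01, min(t * 0.01, 1.0), 0.5)
                for t in range(5)]
    recorded = record_performance(gestures)
    varied = replay(recorded, speed=1.5, intensity=1.2)
    print(varied[0])
```

The design point the sketch tries to capture is that only map_gesture and the parameter set are specific to the door model; the recording and variation machinery is model-agnostic, which is what lets a single recorded performance adapt to the interactive environment at playback time.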
