Abstract

In virtual auditory environments, a spatialized sound source is typically simulated in two stages: first, a dry monophonic signal is recorded or synthesized; then, spatial attributes (directivity, width, and position) are applied by dedicated signal processing algorithms. In this paper, a unified analysis/spatialization/synthesis system is presented. It is based on the spectral modeling framework, which analyzes and synthesizes sounds as a combination of time-varying sinusoidal, noisy, and transient contributions. The proposed system takes advantage of this representation to allow intrinsically parametric sound transformations, such as the spatial distribution of sinusoids or the diffusion of the noisy contribution around the listener. It integrates timbre and spatial parameters at the same level of sound generation, enhancing both control capability and computational performance.
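As a rough illustration of the idea of attaching spatial parameters directly to spectral-model components, the following is a minimal sketch, not the authors' implementation: each sinusoidal partial carries its own azimuth and is rendered with equal-power stereo amplitude panning, while a decorrelated noise contribution approximates diffusion around the listener. All partial frequencies, amplitudes, and azimuths here are invented for illustration.

```python
import numpy as np

SR = 44100                    # sample rate (Hz)
DUR = 1.0                     # duration (s)
t = np.arange(int(SR * DUR)) / SR

# Hypothetical partials: (frequency in Hz, amplitude, azimuth in [-1, 1]).
# Each sinusoid carries its own spatial parameter, so spatialization
# happens at the same level as sound generation.
partials = [(220.0, 0.5, -0.8), (440.0, 0.3, 0.0), (660.0, 0.2, 0.7)]

out = np.zeros((len(t), 2))   # stereo output buffer

for freq, amp, azimuth in partials:
    s = amp * np.sin(2 * np.pi * freq * t)
    # Equal-power panning: azimuth -1 maps to left, +1 to right.
    theta = (azimuth + 1.0) * np.pi / 4.0
    out[:, 0] += np.cos(theta) * s
    out[:, 1] += np.sin(theta) * s

# Noisy contribution: two independent noise streams stand in for a
# decorrelated, diffuse component spread "around the listener".
out += 0.05 * np.random.randn(len(t), 2)

out /= np.max(np.abs(out))    # normalize to avoid clipping
```

A full sines-plus-noise-plus-transients system would derive the partial and noise parameters from analysis of a recorded signal and could use more channels or HRTF rendering, but the per-component spatial control shown here is the core of the unified approach the abstract describes.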
