Abstract

Acoustical studies of sound production in bowed string instruments show that there is a direct relationship between bowing controls and the sound produced. During note sustains, the three major bowing parameters that determine the characteristics of the sound are the bowing force (bforce), the bowing distance to the bridge (bbd), and the bowing velocity (bvel). During note attacks, the third parameter is acceleration (bacc) rather than velocity. We are interested in understanding this correspondence between bowing controls and sound in the opposite direction, i.e., mapping sound features extracted from an audio recording back to the original bowing controls used. This inverse process is usually called indirect acquisition of gestures and is of great interest in fields ranging from acoustics and sound synthesis to motor learning and augmented performance. Indirect acquisition is based on processing the audio signal and is usually informed by acoustical or physical properties of the sound or the sound production mechanism. In this paper, we present methods for indirect acquisition of violin controls from an audio signal, based on training statistical models on a database of multimodal data (control and sound) from violin performances.
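As a rough illustration of the kind of inverse mapping the abstract describes (not the authors' actual model), one could frame indirect acquisition as a multivariate regression from frame-level audio features to the three bowing controls. The sketch below uses synthetic data and an ordinary least-squares fit; the feature dimensions, the mapping matrix, and the noise level are all hypothetical placeholders standing in for a real multimodal database.

```python
import numpy as np

# Synthetic stand-in for a multimodal database: each row pairs
# frame-level audio features with the bowing controls (bvel, bforce, bbd)
# captured at the same instant during a violin performance.
rng = np.random.default_rng(0)
n_frames, n_features = 200, 3
X = rng.normal(size=(n_frames, n_features))        # audio features per frame

true_W = np.array([[0.8, -0.2, 0.1],               # hypothetical ground-truth
                   [0.1,  0.9, -0.3],              # feature-to-control map
                   [-0.4, 0.2, 0.7]])
Y = X @ true_W + 0.01 * rng.normal(size=(n_frames, 3))  # noisy controls

# Fit the inverse model: least-squares estimate of the mapping from
# sound features back to bowing controls.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Apply the trained model to unseen audio frames to recover controls.
X_new = rng.normal(size=(5, n_features))
controls = X_new @ W_hat  # columns: estimated bvel, bforce, bbd
print(controls.shape)     # (5, 3)
```

A real system would replace the synthetic features with descriptors extracted from the recording (e.g., energy or spectral measures) and would likely need a nonlinear model; the linear fit here only shows the overall train-then-invert structure.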
