Abstract

This paper describes a multi-parametric user interface based on the Musical Instrument Digital Interface (MIDI) Creator system developed at York. The system provides MIDI data in response to changing pressures on five strain-gauge sensors, which control the fundamental frequency, the first three formants, and the overall amplitude of synthesized speech. Vocal synthesis is achieved by means of a freely available time-domain formant synthesis system running on a standard PC-compatible machine. The result is a novel hand-controlled speech synthesizer that is not command- or phoneme-based, but is instead more like a continuously controlled musical instrument in which the speech sounds are shaped in real time.
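
The following is a minimal sketch, not the authors' implementation, of how five MIDI continuous-controller streams from pressure sensors might be mapped onto formant-synthesis parameters. The controller numbers, parameter ranges, and scaling used below are illustrative assumptions rather than values taken from the paper.

    # Sketch: map five MIDI continuous controllers onto speech-synthesis
    # parameters (F0, F1-F3, amplitude). CC numbers and ranges are assumed
    # for illustration only, not taken from the MIDI Creator system.

    # Assumed assignment of the five strain-gauge sensors to MIDI CC numbers.
    CC_TO_PARAM = {
        16: "f0",    # fundamental frequency
        17: "f1",    # first formant
        18: "f2",    # second formant
        19: "f3",    # third formant
        20: "amp",   # overall amplitude
    }

    # Assumed output range for each synthesis parameter.
    PARAM_RANGE = {
        "f0":  (80.0, 400.0),     # Hz
        "f1":  (250.0, 1000.0),   # Hz
        "f2":  (600.0, 2500.0),   # Hz
        "f3":  (1500.0, 3500.0),  # Hz
        "amp": (0.0, 1.0),        # linear gain
    }

    def cc_to_value(param: str, cc_value: int) -> float:
        """Scale a 7-bit MIDI controller value (0-127) into the parameter's range."""
        lo, hi = PARAM_RANGE[param]
        return lo + (hi - lo) * (cc_value / 127.0)

    def handle_midi_message(status: int, data1: int, data2: int, synth_params: dict) -> None:
        """Update the current synthesis parameters from one MIDI control-change message."""
        if status & 0xF0 == 0xB0 and data1 in CC_TO_PARAM:  # control change on any channel
            param = CC_TO_PARAM[data1]
            synth_params[param] = cc_to_value(param, data2)

    # Example: the sensor mapped to CC 16 sends value 64 -> mid-range F0 (~241 Hz).
    params = {}
    handle_midi_message(0xB0, 16, 64, params)
    print(params)

In such a scheme the synthesizer would simply read the current parameter dictionary on each synthesis frame, so the sensors shape the sound continuously rather than triggering discrete commands or phonemes.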
