Abstract
This paper describes a method of producing artificial speech from a phonetic input; that is, symbols representing the names of the phonemes corresponding to a given text are fed into a machine, and the acoustic waveforms of connected speech emerge. The experimental work was carried out on an electronic computer (IBM 7090), but the scheme is simple enough to permit realization with analog hardware. The talking-machine program is divided into two parts. The first simulates a more or less conventional resonance synthesizer of the tandem variety, requiring nine control signals: buzz intensity, hiss intensity, pitch, plus the center frequencies and bandwidths of three formants. Initially, this part of the program was used alone in experiments in which the inputs were detailed specifications of the control signals, derived from spectrograms and physiological data and sampled at approximately three times the phonemic rate. Results from this phase were later combined with known results in speech perception to produce the rules used by the second program, which accepts as input the names of phonemes punched on IBM cards and produces the control signals that drive the resonance synthesizer.
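The tandem (cascade) resonance synthesizer named in the abstract can be sketched in modern terms: a buzz source (pitch-periodic pulses) and a hiss source (noise) are mixed, then passed in series through three second-order resonators set to the formant center frequencies and bandwidths, giving the nine control signals the abstract lists. The sketch below is an illustrative reconstruction under stated assumptions (8 kHz sample rate, 10 ms frames, standard digital-resonator coefficients); it is not the paper's implementation.

```python
import math
import random

def resonator(signal, f, bw, fs):
    """Second-order digital resonator: one formant at center frequency
    f (Hz) with bandwidth bw (Hz), at sample rate fs (Hz)."""
    c = -math.exp(-2 * math.pi * bw / fs)
    b = 2 * math.exp(-math.pi * bw / fs) * math.cos(2 * math.pi * f / fs)
    a = 1.0 - b - c  # normalizes gain near the center frequency
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a * x + b * y1 + c * y2
        out.append(y)
        y2, y1 = y1, y
    return out

def synthesize(frames, fs=8000):
    """frames: list of dicts, one per 10 ms control-signal frame, with the
    nine controls: 'buzz' and 'hiss' amplitudes, 'f0' pitch in Hz, and
    'formants', three (center_frequency, bandwidth) pairs in Hz."""
    rng = random.Random(0)       # deterministic noise for repeatability
    n = fs // 100                # samples per 10 ms frame
    phase = 0.0
    out = []
    for fr in frames:
        # Mixed excitation: pitch-periodic pulses plus flat noise.
        src = []
        for _ in range(n):
            phase += fr["f0"] / fs
            pulse = 1.0 if phase >= 1.0 else 0.0
            phase %= 1.0
            src.append(fr["buzz"] * pulse + fr["hiss"] * rng.uniform(-1, 1))
        # Tandem connection: the three formant resonators in series.
        for f, bw in fr["formants"]:
            src = resonator(src, f, bw, fs)
        out.extend(src)
    return out

# A steady vowel-like sound: 50 ms of voiced excitation at 100 Hz
# through illustrative formant values (not taken from the paper).
frames = [{"buzz": 1.0, "hiss": 0.0, "f0": 100,
           "formants": [(700, 90), (1200, 110), (2600, 170)]}] * 5
wave = synthesize(frames)
```

Holding the nine controls constant for several frames yields a steady vowel; the paper's second program would instead generate these control trajectories by rule from phoneme names.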