Abstract
The synthesis of speech is discussed as one of the simpler problems of language automation. While speech synthesizers will doubtless ultimately have many practical applications, their chief value at present is in basic research on the relation of speech parameters to linguistic judgments. Two basic methods of speech synthesis are considered: 1) the generation of speech from stored segments, and 2) the generation of speech through continuous, individual control of the various speech parameters; in the latter case, the parameters may be physiological or acoustical. It is concluded that electronic analogues of the physiological speech mechanism provide a means of evaluating hypotheses about the physiological-acoustic speech transformation, and that acoustical speech simulators are the most realistic and practical research tools for the experimental study of speech perception.