Abstract

A system for acoustic-phonetic analysis of continuous speech is being developed to serve as part of an automatic speech understanding system. The acoustic system accepts the speech waveform as input and produces as output a string of phoneme-like units referred to as acoustic phonetic elements (APEL's). This paper should be considered a progress report, since the system is still under active development. The initial phase of the acoustic analysis consists of signal processing and parameter extraction, and includes spectrum analysis via linear prediction, computation of a number of parameters of the spectrum, and fundamental frequency extraction. This is followed by a preliminary segmentation of the speech into a few broad acoustic categories and formant tracking during vowel-like segments. The next phase consists of more detailed segmentation and classification intended to meet the needs of subsequent linguistic analysis. The preliminary segmentation and segment classification yield the following categories: vowel-like sound; volume dip within a vowel-like sound; fricative-like sound; stop consonants, including silence or voice bar, and the associated burst. These categories are produced by a decision tree based upon energy measurements in selected frequency bands, derivatives and ratios of these measurements, a voicing detector, and a few editing rules.
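The frame-level decision logic behind such a broad-category segmentation might be sketched as follows. This is a minimal illustration only: the band choices, threshold values, and category names are assumptions for the sketch, not the thresholds or editing rules of the system described in the paper.

```python
def classify_frame(low_band_energy, high_band_energy, total_energy, voiced):
    """Assign one analysis frame to a broad acoustic category using
    band-energy measurements, their ratio, and a voicing flag.

    All thresholds below are illustrative placeholders.
    """
    SILENCE_FLOOR = 1e-4  # assumed minimum energy for non-silence

    # Very low total energy: silence or a voice bar during a stop closure.
    if total_energy < SILENCE_FLOOR:
        return "silence/voice-bar"

    # Ratio of high-band to low-band energy separates vowel-like sounds
    # (energy concentrated low) from fricative-like sounds (energy high).
    ratio = high_band_energy / (low_band_energy + 1e-12)

    if voiced and ratio < 1.0:
        return "vowel-like"
    if not voiced and ratio > 1.0:
        return "fricative-like"
    return "other"
```

In the actual system, per-frame decisions like these would be smoothed by the derivatives of the energy measurements and the editing rules mentioned above before segment boundaries are emitted.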
The more detailed classification algorithms include: 1) detection and identification of some diphthongs, semivowels, and nasals, through analysis of formant motions, positions, and amplitudes; 2) a vowel identifier, which determines three ranked choices for each vowel based on a comparison of the formant positions in the detected vowel segment to stored formant positions in a speaker-normalized vowel table; 3) a fricative identifier, which employs measurement of relative spectral energies in several bands to group the fricative segments into phoneme-like categories; 4) stop consonant classification based on the properties of the plosive burst. The above algorithms have been tested on a substantial corpus of continuous speech data. Performance results, as well as detailed descriptions of the algorithms, are given.
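The vowel identifier (item 2 above) can be sketched as a nearest-neighbor lookup in formant space. The table values below are typical published averages for American English vowels, not the paper's speaker-normalized table, and the distance metric is an assumption for illustration.

```python
import math

# Illustrative (F1, F2) targets in Hz; placeholder values, not the
# speaker-normalized table used by the system described in the paper.
VOWEL_TABLE = {
    "iy": (270, 2290),
    "ih": (390, 1990),
    "eh": (530, 1840),
    "ae": (660, 1720),
    "aa": (730, 1090),
    "uw": (300, 870),
}

def rank_vowels(f1, f2, table=VOWEL_TABLE, n=3):
    """Return the n vowels whose stored formant positions are closest
    (Euclidean distance in F1-F2 space) to the measured formants."""
    ranked = sorted(
        table,
        key=lambda v: math.hypot(f1 - table[v][0], f2 - table[v][1]),
    )
    return ranked[:n]
```

Returning three ranked choices rather than a single label lets the subsequent linguistic analysis resolve ambiguous vowels using higher-level constraints.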
