Abstract

A system for recognising continuously spoken sentences is presented. The architecture of the recogniser is divided into two stages. The first contains neural networks that transform ordinary signal-processing cepstral parameters into a set of continuously valued acoustic-phonetic features for each frame of the speech signal and that divide the signal into acoustic-phonetic segments. The output of the first stage, which combines the segmentation results with the acoustic-phonetic features, gives a first estimate of the phoneme sequence. The second stage contains an expert system consisting of allophonic rules, a lexicon of transcriptions of the words in the selected application vocabulary, syntax rules and the overall control structure. In this stage the incoming phoneme string is processed using the lexicon and an island-driven parsing system. The current vocabulary consists of 35 words drawn from a CAD-like application.
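The following is a minimal structural sketch of the two-stage pipeline described in the abstract, assuming per-frame cepstral input and a phoneme-transcription lexicon. All names (Segment, stage_one, stage_two, the grouping heuristic and the lexicon lookup) are hypothetical placeholders for illustration, not the authors' implementation; the neural-network feature mapping, allophonic rules and island-driven parsing are only indicated by comments.

# Illustrative sketch only: hypothetical names, trivial placeholder logic.
from dataclasses import dataclass
from typing import Dict, List, Sequence


@dataclass
class Segment:
    """An acoustic-phonetic segment with a first phoneme estimate."""
    start_frame: int
    end_frame: int
    phoneme: str


def stage_one(cepstral_frames: Sequence[Sequence[float]]) -> List[Segment]:
    """Stage 1 (neural networks in the paper): map per-frame cepstral
    parameters to continuously valued acoustic-phonetic features and
    segment the signal. Placeholder: group every 10 frames into one
    segment labelled with a dummy phoneme."""
    segments = []
    for start in range(0, len(cepstral_frames), 10):
        end = min(start + 10, len(cepstral_frames))
        segments.append(Segment(start, end, phoneme="?"))
    return segments


def stage_two(segments: List[Segment],
              lexicon: Dict[str, List[str]]) -> List[str]:
    """Stage 2 (expert system in the paper): apply allophonic rules,
    lexical lookup and syntax constraints to the phoneme string.
    Placeholder: exact match of the phoneme string against lexicon
    transcriptions; island-driven parsing and syntax rules omitted."""
    phoneme_string = [s.phoneme for s in segments]
    return [word for word, pron in lexicon.items() if pron == phoneme_string]


if __name__ == "__main__":
    frames = [[0.0] * 12 for _ in range(30)]  # 30 frames of 12 cepstra
    print(stage_two(stage_one(frames), lexicon={}))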
