Abstract

The automatic recognition of continuous English speech, as opposed to isolated words, must contend with phonological processes at word boundaries that can distort the mapping between surface form and lexical entry. Past recognition systems have used generative phonological rules in a precompilation stage to expand the working dictionary, but this greatly increases the bulk and complexity of the lexicon. It is also possible to apply analytic rules at recognition time to undo the putative effects of phonological processes, though this can lead to the postulation of nonwords and to slower processing. A solution that combines the advantages of both approaches, with the disadvantages of neither, is to use finite state transducers (FSTs) as filters on the permitted matchings between input strings and lexical entries. These were implemented in a recognition system and found to increase the percentage of words recognized by three percentage points, at the cost of halving the signal-to-noise ratio, compared to the same system without phonological FSTs.
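To make the filtering idea concrete, the sketch below shows one way an analytic FST can act as a filter on surface-to-lexical matchings. It is a minimal illustration under stated assumptions, not the original system: the place-assimilation rule (lexical /n/ surfacing as [m] before a bilabial, as in "ten books" pronounced "tem books"), the orthographic symbol inventory, and the names `build_fst` and `permits` are all introduced here, since the abstract does not specify the system's rule set or representation.

```python
# Illustrative sketch: an FST as a filter on matchings between a surface
# string and a candidate lexical string. The assimilation rule and symbol
# inventory are assumptions for demonstration, not the original rule set.
from collections import deque

BILABIALS = {"p", "b", "m"}

def build_fst(alphabet):
    # Arcs map (state, surface_symbol, lexical_symbol) -> next_state.
    # State 0: default state; any symbol may match itself.
    arcs = {(0, c, c): 0 for c in alphabet}
    # Surface 'm' may realize lexical 'n', but only when a bilabial
    # follows; state 1 records that a bilabial is now owed.
    arcs[(0, "m", "n")] = 1
    for c in BILABIALS:
        arcs[(1, c, c)] = 0
    return arcs

def permits(arcs, surface, lexical, accept=frozenset({0})):
    """True iff the FST can consume `surface` while emitting `lexical`."""
    # Breadth-first search over (state, surface_pos, lexical_pos)
    # configurations; this generalizes to nondeterministic rule sets,
    # though the arc encoding above yields at most one arc per step.
    frontier = deque([(0, 0, 0)])
    seen = {(0, 0, 0)}
    while frontier:
        state, i, j = frontier.popleft()
        if i == len(surface) and j == len(lexical):
            if state in accept:
                return True
            continue
        if i < len(surface) and j < len(lexical):
            nxt = arcs.get((state, surface[i], lexical[j]))
            if nxt is not None and (nxt, i + 1, j + 1) not in seen:
                seen.add((nxt, i + 1, j + 1))
                frontier.append((nxt, i + 1, j + 1))
    return False

alphabet = set("tembooks" "tenbooks" "temtalks" "tentalks")
fst = build_fst(alphabet)
print(permits(fst, "tembooks", "tenbooks"))  # True: [m] before 'b' is licensed
print(permits(fst, "temtalks", "tentalks"))  # False: no bilabial follows
```

Because the transducer licenses the m-for-n match only in the bilabial context, it blocks the nonword postulations an unconstrained analytic rule would allow, while leaving the lexicon itself unexpanded, which is the trade-off the abstract attributes to the FST-as-filter design.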
