Abstract

A novel combination of multilayer perceptrons (MLPs) and hidden Markov models (HMMs) is presented. Instead of using MLPs as probability generators for HMMs, the authors propose to use MLPs as labelers for discrete-parameter HMMs. Compared with the probabilistic interpretation of MLPs, this gives them the advantage of flexibility in system design (e.g., the use of word models instead of phonetic models while using the same MLPs). Moreover, since the MLPs do not need to reach a global minimum, they can have fewer hidden nodes and can be trained faster. In addition, the MLPs do not need to be retrained with segmentations generated by a Viterbi alignment. Compared with Euclidean labeling, their method has the advantages of needing fewer HMM parameters per state and achieving a higher recognition accuracy. Several improvements of the baseline MLP labeling are investigated. When using one MLP, the best results are obtained by giving the labels a fuzzy interpretation. It is also possible to use parallel MLPs, each based on a different parameter set (e.g., basic parameters, their time derivatives, and their second-order time derivatives). This strategy improves recognition accuracy considerably. A final improvement is the training of MLPs for subphoneme classification.
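The core idea of the abstract can be illustrated with a small sketch: an MLP's softmax outputs are converted into discrete observation symbols for a discrete HMM, either by hard labeling (winner-take-all) or by the fuzzy interpretation, where each frame's emission probability is the posterior-weighted mixture of the HMM's label emission probabilities. This is a toy illustration under assumed dimensions and random stand-in parameters, not the authors' implementation; all names (`frame_likelihoods`, `forward_loglik`) and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 acoustic labels (MLP output classes), 3 HMM states,
# 5 frames of speech.
N_LABELS, N_STATES, T = 8, 3, 5

# Stand-in for trained MLP outputs: one softmax-normalized row per frame.
logits = rng.normal(size=(T, N_LABELS))
posteriors = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Toy discrete-HMM parameters: per-state label emission table B,
# left-to-right transitions A, initial distribution pi.
B = rng.random((N_STATES, N_LABELS))
B /= B.sum(axis=1, keepdims=True)
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])
pi = np.array([1.0, 0.0, 0.0])

def frame_likelihoods(posteriors, fuzzy):
    """Per-frame state likelihoods derived from MLP outputs.

    Hard labeling: each frame becomes the single best label (argmax).
    Fuzzy labeling: the emission is the posterior-weighted mix of B's columns.
    """
    if fuzzy:
        return posteriors @ B.T             # shape (T, N_STATES)
    labels = posteriors.argmax(axis=1)      # winner-take-all symbols
    return B[:, labels].T                   # shape (T, N_STATES)

def forward_loglik(posteriors, fuzzy=False):
    """Scaled forward algorithm; returns the total log-likelihood."""
    like = frame_likelihoods(posteriors, fuzzy)
    alpha = pi * like[0]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(like)):
        alpha = (alpha @ A) * like[t]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp
```

Comparing `forward_loglik(posteriors, fuzzy=False)` with `fuzzy=True` shows how the same HMM scores a hard label stream versus the fuzzy interpretation; in the paper's setting the fuzzy variant gave the best single-MLP results.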
