Abstract

Neural networks and hidden Markov models (HMMs) are compared. It is shown that conventional HMMs are equivalent to linear recurrent networks (LRNs) with time-varying weights. Inspired by the nonlinear nature of the nodes in neural networks, nonlinearity is introduced into the HMMs, and a connectionist training approach is proposed to train such nonlinear HMMs. The training is discriminative when the objective function is defined as the mutual information between the observed event and the Markov model. The introduction of nonlinearity allows the HMMs to be viewed from a broader perspective. For instance, the normalization of the forward probabilities can be interpreted as a kind of nonlinearity in a nonlinear HMM. The proposed training algorithm has been tested on a speaker-dependent isolated digit recognition problem; this test demonstrated that the discriminative power of the HMMs can be enhanced.
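As a brief illustration of the LRN view (the notation below is assumed for exposition and is not taken from the paper), the standard HMM forward recursion with transition probabilities $a_{ij}$ and emission probabilities $b_j(o_t)$ can be written as

\[
\alpha_t(j) \;=\; b_j(o_t)\sum_{i} a_{ij}\,\alpha_{t-1}(i)
\quad\Longleftrightarrow\quad
\boldsymbol{\alpha}_t \;=\; W_t\,\boldsymbol{\alpha}_{t-1},
\qquad
(W_t)_{ji} \;=\; a_{ij}\,b_j(o_t),
\]

i.e., a linear recurrence whose weight matrix $W_t$ varies with the observation at each time step. The scaled forward algorithm,

\[
\hat{\boldsymbol{\alpha}}_t \;=\; \frac{W_t\,\hat{\boldsymbol{\alpha}}_{t-1}}{\lVert W_t\,\hat{\boldsymbol{\alpha}}_{t-1}\rVert_1},
\]

renormalizes the state vector at every step, which illustrates the normalization-as-nonlinearity interpretation mentioned in the abstract.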
