Abstract

Simple recurrent neural networks (SRNs) have been advocated as an alternative to traditional probabilistic models for grammatical inference and language modeling. However, unlike hidden Markov models and stochastic grammars, SRNs are not explicitly formulated as probability models: they do not provide their predictions in the form of a probability distribution over the alphabet. In this paper, we introduce a stochastic variant of the SRN. This variant makes explicit the functional description of how the SRN solution reflects the target structure that generates the training sequences. We explore the links between the stochastic SRN and traditional grammatical inference models. We show that the stochastic single-layer SRN can be seen as a generalized hidden Markov model or a probabilistic automaton. The two-layer stochastic SRN can be interpreted as a probabilistic machine whose state transitions are triggered by inputs and produce outputs, that is, a probabilistic finite-state sequential transducer. It can also be viewed as a hidden Markov model with two alphabets, each with its own distinct output distribution. We provide efficient procedures, based on the forward-backward approach used with hidden Markov models, for evaluating the various probabilities that occur in the model. We derive a gradient-based algorithm for finding the network parameters that maximize the likelihood of the training sequences. Finally, we show that if the target structure generating the training sequences is unifilar, the trained stochastic SRN behaves deterministically.
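The paper's own evaluation procedures are not reproduced on this page. As an illustrative sketch only, the following Python code implements the standard scaled forward-backward recursion for a plain discrete hidden Markov model, the building block the abstract refers to; the names (pi, A, B, forward_backward) and the per-step scaling scheme are assumptions made for this example, not the paper's notation for the stochastic SRN.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Scaled forward-backward recursions for a discrete HMM (illustrative sketch).

    pi  : (S,)   initial state distribution
    A   : (S, S) transition probabilities, A[i, j] = P(s_{t+1}=j | s_t=i)
    B   : (S, K) emission probabilities,  B[i, k] = P(o_t=k | s_t=i)
    obs : (T,)   observation indices into an alphabet of size K

    Returns the scaled forward/backward variables and log P(obs).
    """
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    scale = np.zeros(T)

    # Forward pass, normalizing each step to avoid numerical underflow.
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    # Backward pass, reusing the forward scaling factors.
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= scale[t + 1]

    # The sequence likelihood is the product of the scaling factors.
    log_likelihood = np.log(scale).sum()
    return alpha, beta, log_likelihood

# Toy two-state example; under this scaling, alpha[t] * beta[t] directly
# gives the posterior state probabilities P(s_t = i | obs).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = np.array([0, 1, 0, 0])
alpha, beta, ll = forward_backward(pi, A, B, obs)
gamma = alpha * beta
```

The log-likelihood computed this way is the quantity that a gradient-based training procedure of the kind the abstract describes would maximize over the model parameters.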
