Abstract

Previous work in the literature has shown that, using a local (one-hot) representation of the alphabet, simple recurrent neural networks can estimate the probability distribution over strings belonging to a stochastic regular language. This paper extends that empirical line of work by adding input time delays to simple recurrent networks. This technique can sometimes avoid the need for fully recurrent architectures (with their high computational cost) to learn certain grammars, and thereby sidesteps the memory problems that arise in networks with only simple recurrences.
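To make the architecture concrete, the following is a minimal sketch of an Elman-style simple recurrent network whose input window concatenates the current symbol with a few delayed copies (the input time delays the abstract refers to). All names, layer sizes, and the number of delays are illustrative assumptions, not the paper's actual configuration; only the untrained forward pass is shown.

```python
import numpy as np

def one_hot(i, n):
    """Local (one-hot) encoding of symbol i over an alphabet of size n."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

class DelayedSRN:
    """Sketch: simple recurrent network with d delayed input copies."""

    def __init__(self, alphabet_size, hidden_size, delays, seed=0):
        rng = np.random.default_rng(seed)
        self.n = alphabet_size
        self.d = delays                                  # number of input time delays
        in_size = alphabet_size * (delays + 1)           # current symbol + d delayed ones
        self.W_in = rng.normal(0, 0.1, (hidden_size, in_size))
        self.W_rec = rng.normal(0, 0.1, (hidden_size, hidden_size))
        self.W_out = rng.normal(0, 0.1, (alphabet_size, hidden_size))

    def predict(self, symbols):
        """Return a next-symbol probability distribution after each input symbol."""
        h = np.zeros(self.W_rec.shape[0])
        history = [np.zeros(self.n)] * self.d            # blank padding for early steps
        outputs = []
        for s in symbols:
            history.append(one_hot(s, self.n))
            x = np.concatenate(history[-(self.d + 1):])  # delayed input window
            h = np.tanh(self.W_in @ x + self.W_rec @ h)  # Elman recurrence
            z = self.W_out @ h
            p = np.exp(z - z.max())
            p /= p.sum()                                 # softmax over the alphabet
            outputs.append(p)
        return outputs

net = DelayedSRN(alphabet_size=2, hidden_size=8, delays=2)
probs = net.predict([0, 1, 1, 0])
```

After training (not shown), each entry of `probs` would be an estimate of the next-symbol distribution of the stochastic regular language generating the training strings.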
