Abstract

We present applications of domain theory in stochastic learning automata and in neural nets. We show that the linear reward-penalty scheme, a basic probabilistic algorithm for binary-state stochastic learning automata, can be modelled by the dynamics of an iterated function system on a probabilistic power domain, and we compute the expected value of any continuous function in the learning process. We then consider a general class of so-called forgetful neural nets, in which pattern learning takes place by a local iterative scheme, and present a domain-theoretic framework for the distribution of synaptic couplings in these networks using the action of an iterated function system on a probabilistic power domain. From this framework we obtain algorithms to compute the decay of the embedding strength of the stored patterns.
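The abstract refers to the linear reward-penalty (L_R-P) scheme for a two-action automaton. As context, here is a minimal sketch of the standard L_R-P update in a stationary random environment; the function name, parameters, and environment model are illustrative assumptions, not taken from the paper. Each branch applies an affine contraction to the action probability, which is why the process can be viewed as an iterated function system on the unit interval:

```python
import random

def linear_reward_penalty(p, a, b, c1, c2, steps, seed=0):
    """One sample run of the two-action linear reward-penalty (L_R-P) scheme.

    p      -- initial probability of selecting action 1
    a, b   -- reward and penalty step sizes in (0, 1)
    c1, c2 -- penalty probabilities of the stationary random environment
    """
    rng = random.Random(seed)
    for _ in range(steps):
        chose_1 = rng.random() < p
        penalised = rng.random() < (c1 if chose_1 else c2)
        if chose_1 and not penalised:
            p = p + a * (1 - p)      # reward action 1: move p toward 1
        elif chose_1 and penalised:
            p = (1 - b) * p          # penalise action 1: shrink p
        elif not chose_1 and not penalised:
            p = (1 - a) * p          # reward action 2: shrink p
        else:
            p = p + b * (1 - p)      # penalise action 2: move p toward 1
    return p
```

Averaging f(p) over many such runs gives a Monte-Carlo estimate of the expected value of a continuous function f in the learning process.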
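For the forgetful networks, the following is a hedged sketch of a generic palimpsest-style local learning rule, in which each stored pattern's contribution to the synaptic couplings decays geometrically as later patterns arrive; all names, the decay parameter, and the specific Hebb-style rule are illustrative assumptions rather than the paper's construction:

```python
def store_patterns(N, patterns, lam, eps):
    """Forgetful local learning on N neurons: each new pattern xi (entries
    +/-1) is added Hebb-style while all earlier contributions decay by the
    factor lam, so a pattern stored k steps ago retains embedding strength
    eps * lam**k in the couplings."""
    J = [[0.0] * N for _ in range(N)]  # synaptic coupling matrix
    for xi in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    J[i][j] = lam * J[i][j] + (eps / N) * xi[i] * xi[j]
    return J
```

The geometric factor lam**k is the decay of embedding strength that the abstract's algorithms compute within the domain-theoretic framework.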
