Abstract

We present applications of domain theory to stochastic learning automata and neural networks. We show that the linear reward-penalty scheme, a basic probabilistic algorithm for binary-state stochastic learning automata, can be modelled by the dynamics of an iterated function system on a probabilistic power domain, and we compute the expected value of any continuous function in the learning process. We then consider a general class of so-called forgetful neural networks, in which pattern learning takes place by a local iterative scheme, and present a domain-theoretic framework for the distribution of synaptic couplings in these networks using the action of an iterated function system on a probabilistic power domain. Finally, we obtain algorithms to compute the decay of the embedding strength of the stored patterns.
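As a concrete illustration of the first result, the linear reward-penalty scheme for a two-action automaton consists of four affine maps on the unit interval, applied with state-dependent probabilities determined by the environment's penalty probabilities — exactly the shape of an iterated function system with place-dependent probabilities. The following sketch (function names, parameter values, and the Monte Carlo approach are illustrative choices, not from the paper) simulates the scheme and estimates the expected value of a continuous function of the action probability after many learning steps:

```python
import random

def lrp_step(p, c1, c2, a, b, rng):
    """One step of the linear reward-penalty (L_RP) scheme for a
    two-action automaton.  p is the probability of choosing action 1;
    c1, c2 are the environment's penalty probabilities for actions 1
    and 2; a, b are the reward and penalty learning rates.  Each of
    the four branches below is an affine map on [0, 1], chosen with a
    probability that depends on the current state p."""
    action1 = rng.random() < p
    penalty = rng.random() < (c1 if action1 else c2)
    if action1 and not penalty:      # action 1 rewarded: move p toward 1
        return p + a * (1.0 - p)
    if action1 and penalty:          # action 1 penalised: shrink p toward 0
        return (1.0 - b) * p
    if not action1 and not penalty:  # action 2 rewarded: shrink p toward 0
        return (1.0 - a) * p
    return p + b * (1.0 - p)         # action 2 penalised: move p toward 1

def expected_value(g, c1, c2, a=0.1, b=0.1, n_steps=2000, n_runs=500, seed=0):
    """Monte Carlo estimate of E[g(p_n)] after n_steps, averaged over
    n_runs independent trajectories started at p = 0.5."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        p = 0.5
        for _ in range(n_steps):
            p = lrp_step(p, c1, c2, a, b, rng)
        total += g(p)
    return total / n_runs
```

For instance, with penalty probabilities c1 = 0.2 and c2 = 0.8 the automaton should come to favour action 1, so the estimated expected value of p itself ends up well above 1/2. The paper's contribution is to compute such expectations via the IFS acting on a probabilistic power domain, with guaranteed convergence, rather than by sampling as done here.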

