Abstract
It has been previously shown by Kohonen and others how a linear system can be trained to associate two deterministic pattern sets, with applications to learning algorithms and adaptive systems. When noise is added to the training patterns, the algorithm is closely related to stochastic approximation algorithms with constant gain. Some results are given here concerning the convergence of the algorithm under different assumptions on the training pattern statistics and the gain, and a stochastic counterpart is given to the basic deterministic algorithm for the case in which an exact association exists. It is established how the non-zero residual error, which results from a gain sequence that does not tend to zero, can be made arbitrarily small by using redundancy. It is also shown how the optimal associative mapping in the noisy case is related to the basic mapping reported earlier.
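The following is a minimal illustrative sketch, not the paper's exact formulation: a linear associative mapping M is trained with a constant-gain, Widrow-Hoff style correction on noisy key patterns, so that the recall error settles at a non-zero residual instead of converging to zero. All names, dimensions, and parameter values (gamma, noise_std, the pattern counts) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_keys, dim_x, dim_y = 5, 20, 10      # number of pattern pairs and their dimensions (assumed)
gamma = 0.01                          # constant gain that does not tend to zero
noise_std = 0.1                       # standard deviation of additive noise on the keys

X = rng.standard_normal((n_keys, dim_x))   # key patterns x_k
Y = rng.standard_normal((n_keys, dim_y))   # associated patterns y_k

M = np.zeros((dim_y, dim_x))               # linear associative mapping to be learned

for step in range(20000):
    k = rng.integers(n_keys)
    x = X[k] + noise_std * rng.standard_normal(dim_x)   # noisy training key
    y = Y[k]
    # constant-gain stochastic-approximation correction: move M x toward y
    M += gamma * np.outer(y - M @ x, x)

# Recall with noiseless keys: because the gain stays constant and the training
# keys were noisy, the relative error does not go to zero but fluctuates
# around a small non-zero residual.
residual = np.linalg.norm(Y - X @ M.T) / np.linalg.norm(Y)
print(f"relative recall error: {residual:.3f}")
```

In this sketch the residual could be reduced by lowering the constant gain or by adding redundancy (more key components per pattern), which is in the spirit of the abstract's claim that redundancy can make the residual error arbitrarily small.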