Abstract

One of the major paradigms for unsupervised learning in artificial neural networks is Hebbian learning. Standard implementations of Hebbian learning are optimal under the assumption of i.i.d. Gaussian noise in the data set. We derive ε-insensitive Hebbian learning by minimising the absolute error in a compressed data set and show that the resulting learning rule is equivalent to the principal component analysis (PCA) networks' learning rules under a variety of conditions. We then show that extending the ε-insensitive rules to nonlinear PCA (NLPCA) gives learning rules which find the independent components of a data set. Finally, we show that ε-insensitive minor components analysis is extremely effective at noise removal.
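As a rough illustration (not taken from the paper itself), an ε-insensitive variant of Oja-style Hebbian PCA learning can be sketched as follows: the usual residual-driven weight update is replaced by the sign of the reconstruction residual whenever that residual exceeds ε, and suppressed otherwise, which corresponds to a least-absolute-error rather than a squared-error criterion. The learning rate eta, threshold eps, epoch count, and the specific update form below are illustrative assumptions.

```python
import numpy as np

def eps_insensitive_hebbian(X, n_components=1, eta=0.01, eps=0.1,
                            n_epochs=50, seed=0):
    """Sketch of epsilon-insensitive Hebbian learning (assumed form).

    Standard Oja/PCA-style Hebbian learning updates weights in proportion
    to the reconstruction residual e = x - W^T y. The epsilon-insensitive
    variant ignores residuals smaller than eps and uses only the sign of
    larger residuals (an L1-like, least-absolute-error criterion).
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, n_features))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x                       # feedforward activation
            e = x - W.T @ y                 # reconstruction residual
            # dead zone of width eps, then sign of the residual
            g = np.where(np.abs(e) > eps, np.sign(e), 0.0)
            W += eta * np.outer(y, g)       # Hebbian update on thresholded residual
    return W

# Example: recover the leading principal direction of anisotropic 2-D data
X = np.random.default_rng(1).normal(size=(500, 2)) @ np.diag([3.0, 0.5])
W = eps_insensitive_hebbian(X)
print(W / np.linalg.norm(W))  # should align roughly with the high-variance axis
```

With eps set to 0 this reduces to a sign-of-residual (pure L1) Hebbian rule, and replacing the thresholded sign by the raw residual recovers the familiar squared-error Oja update; the dead zone is what gives the rule its robustness to small, dense noise.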
