Abstract

Recently, several neural algorithms have been introduced for the problem of source separation, or independent component analysis. In this paper we approach the problem from the point of view of a single neuron. Two simple learning rules are presented as examples of a more general class of algorithms. The first rule learns to separate an independent component that has negative kurtosis, and the second rule separates a component with positive kurtosis. The learning rules are stochastic gradient descent algorithms that result in Hebbian learning with very simple constraint terms. The convergence of the learning rules can be rigorously proven without any unnecessary hypotheses on the distributions of the independent components. Simulations confirm the validity of the algorithms.
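The abstract does not state the update equations, but the family of rules it describes can be illustrated with a minimal sketch: a single neuron whose weight vector follows a stochastic kurtosis gradient on whitened data, with normalization as the constraint. Everything below is an assumption for illustration (the mixing matrix, the cubic nonlinearity, the decaying learning rate, and the hypothetical `one_unit_ica` helper), not the paper's exact rules; the sign of the update selects whether a negative- or positive-kurtosis component is sought.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two independent sources: sub-Gaussian (uniform, negative kurtosis)
# and super-Gaussian (Laplacian, positive kurtosis).
s = np.vstack([rng.uniform(-1.0, 1.0, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[1.0, 0.6],      # illustrative mixing matrix (assumption)
              [0.4, 1.0]])
x = A @ s

# Whiten the mixtures: zero mean, identity covariance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
x = E @ np.diag(d ** -0.5) @ E.T @ x

def one_unit_ica(x, sign, lr0=0.1, tau=2000.0, epochs=3, seed=1):
    """Stochastic-gradient Hebbian rule on whitened data:
        w += lr * sign * (x_t * y**3 - 3*w),   y = w @ x_t,
    followed by renormalization to ||w|| = 1.
    sign=-1 descends kurtosis (negative-kurtosis component),
    sign=+1 ascends it (positive-kurtosis component)."""
    gen = np.random.default_rng(seed)
    w = gen.standard_normal(x.shape[0])
    w /= np.linalg.norm(w)
    t = 0
    for _ in range(epochs):
        for i in gen.permutation(x.shape[1]):
            t += 1
            lr = lr0 * tau / (tau + t)          # decaying step size
            y = w @ x[:, i]
            w += lr * sign * (x[:, i] * y ** 3 - 3.0 * w)
            w /= np.linalg.norm(w)              # simple constraint term
    return w

w_sub = one_unit_ica(x, sign=-1)   # seeks the sub-Gaussian source
w_sup = one_unit_ica(x, sign=+1)   # seeks the super-Gaussian source
y_sub = w_sub @ x
y_sup = w_sup @ x

# Each extracted signal should align with one source (up to sign).
c_sub = abs(np.corrcoef(y_sub, s[0])[0, 1])
c_sup = abs(np.corrcoef(y_sup, s[1])[0, 1])
print(round(c_sub, 2), round(c_sup, 2))
```

On whitened, unit-variance outputs, `np.mean(y ** 4) - 3` estimates the kurtosis, so one can check that the two runs indeed recover components of opposite kurtosis sign.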
