Abstract

Supervised parameter adaptation in many artificial neural networks is largely based on an instantaneous version of gradient descent known as the least-mean-square (LMS) algorithm. This paper considers only neural models that are linear with respect to their adaptable parameters and makes two major contributions. First, it derives an expression for the gradient-noise covariance under the assumption that the input samples are real, stationary, and Gaussian-distributed, but possibly partially correlated. This expression relates the gradient correlation and input correlation matrices to the gradient-noise covariance and explains why the gradient noise generally correlates maximally with the steepest principal axis and minimally with the axis of smallest curvature, regardless of the magnitude of the weight error. Second, a recursive expression for the weight-error correlation matrix is derived in a straightforward manner using the gradient-noise covariance, and comparisons are drawn with the complex LMS algorithm.
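
For orientation, the sketch below illustrates the instantaneous-gradient (LMS) update for a model that is linear in its parameters, as described above. The function and variable names (lms_update, mu) are illustrative and not taken from the paper; this is a minimal sketch assuming a real-valued linear model y = wᵀx with desired response d.

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS step for a linear-in-parameters model y = w @ x.

    The instantaneous squared error replaces the true mean-square error,
    so each step follows a noisy (instantaneous) estimate of the gradient.
    """
    e = d - w @ x            # instantaneous error
    return w + mu * e * x    # stochastic gradient-descent step

# Example usage (illustrative only): adapt toward a fixed target weight vector.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
for _ in range(1000):
    x = rng.standard_normal(3)       # stationary Gaussian input samples
    d = w_true @ x                   # desired response
    w = lms_update(w, x, d, mu=0.01)
```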
