Abstract

This paper analyses three algorithms previously studied in the computational learning theory community: the gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (the EG± algorithm), and the exponentiated gradient algorithm with unnormalised positive and negative weights (the EGU± algorithm). The analysis follows the form used in the signal processing community and is in terms of the mean squared error (MSE). A relationship between the learning rate and the MSE of predictions is found for this family of algorithms. Trials involving simulated acoustic echo cancellation are conducted in which the learning rates are selected so that the algorithms converge to the same steady-state MSE. These trials demonstrate that, when the target is sparse, the EG± algorithm typically converges more quickly than the GD and EGU± algorithms, which themselves perform very similarly.
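
To make the three update rules concrete, the sketch below gives them in the form commonly stated in the online learning literature (Kivinen and Warmuth), specialised to the squared loss. The function names, the total-weight parameter U, and the exact normalisation constant are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gd_update(w, x, y, eta):
    """GD: additive update on the squared loss (factor of 2 absorbed into eta)."""
    err = w @ x - y
    return w - eta * err * x

def eg_pm_update(wp, wm, x, y, eta, U=1.0):
    """EG+/-: multiplicative update on positive/negative weight vectors,
    renormalised so the total weight mass stays at U. Prediction uses wp - wm."""
    err = (wp - wm) @ x - y
    rp = wp * np.exp(-eta * err * x)
    rm = wm * np.exp(+eta * err * x)
    Z = (rp.sum() + rm.sum()) / U  # normalisation keeps sum(wp) + sum(wm) = U
    return rp / Z, rm / Z

def egu_pm_update(wp, wm, x, y, eta):
    """EGU+/-: as EG+/-, but the weights are left unnormalised."""
    err = (wp - wm) @ x - y
    return wp * np.exp(-eta * err * x), wm * np.exp(+eta * err * x)

# Example: one online step on hypothetical data with a sparse target,
# the setting in which EG+/- is reported to converge fastest.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = 0.5 * x[0]  # only one active tap
w = gd_update(np.zeros(8), x, y, eta=0.01)
```

In this multiplicative form, weights associated with irrelevant inputs decay geometrically, which is one intuition for why EG± can outperform GD when the target is sparse.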
