Abstract

Using the proportionate-type steepest descent algorithm, we express the current weight deviations in terms of the initial weight deviations. We then minimize the mean square output error with respect to the gains at a given time instant. The corresponding optimal average gains are found using a water-filling procedure. The stochastic counterpart is obtained in two steps. First, the true weights, which are unknown, are replaced by their current estimates in the expression for the optimal average gains. Second, the current gains are computed from the difference between the estimated optimal cumulative gain and the actual cumulative gain. Additionally, a simplified gain-allocation method is proposed that avoids the sorting required by the water-filling procedure. The resulting algorithm initially behaves like the proportionate normalized least mean square (PNLMS) algorithm and, as time proceeds, like the normalized least mean square (NLMS) algorithm. This behavior is typically desired and yields enhanced convergence performance. We present results for the new algorithms and compare them with other standard proportionate-type NLMS algorithms.
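The abstract does not give the paper's exact gain formula, but the water-filling step it refers to follows a standard pattern: choose a common "water level" and allocate to each coefficient the amount by which the level exceeds that coefficient's cost, with the allocations summing to a fixed budget. A minimal sketch of such a generic allocator (function name, cost values, and budget are all illustrative assumptions, not the paper's notation) might look like:

```python
def water_fill(costs, budget):
    """Generic water-filling allocator (illustrative sketch only).

    Finds a level mu such that the allocations
        a_i = max(mu - costs[i], 0)
    sum to `budget`, using the usual sort-based search for mu.
    """
    n = len(costs)
    sorted_costs = sorted(costs)  # this sort is what the simplified method avoids
    prefix = 0.0
    level = 0.0
    for k in range(1, n + 1):
        prefix += sorted_costs[k - 1]
        # Tentative level if exactly the k cheapest coefficients get water.
        mu = (budget + prefix) / k
        # Accept mu once it no longer reaches the next-cheapest cost.
        if k == n or mu <= sorted_costs[k]:
            level = mu
            break
    return [max(level - c, 0.0) for c in costs]

# Example: coefficients with lower "cost" receive larger gains,
# mirroring how water-filling favors large-magnitude weight estimates.
alloc = water_fill([1.0, 2.0, 4.0], budget=3.0)  # -> [2.0, 1.0, 0.0]
```

The inner loop shows why the procedure requires sorting: the water level depends on how many coefficients lie below it, which is easiest to determine on an ordered cost list. This is the step the paper's simplified gain-allocation method is stated to avoid.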
