Several modifications of the well-known LMS algorithm have been proposed for improved operation. This work analyzes one such algorithm, which corresponds to the standard LMS algorithm with an additional update term parameterized by a scalar factor alpha, where |alpha| < 1. The convergence analysis yields some novel behavior insofar as it leads to complex eigenvalues of the transition matrix for the mean weight vector. It is demonstrated that the algorithm becomes unstable as |alpha| -> 1. Several computer simulation examples support the conclusion that, while the momentum LMS (MLMS) algorithm has smoother convergence, no significant gain in convergence speed over the conventional LMS algorithm can be expected. However, because of this smoothing effect, the MLMS algorithm may be useful in applications where error bursting is a problem. The results presented also illustrate some convergence properties of a nonlinear form of the MLMS algorithm, such as that used to train a single-layer perceptron.
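
The momentum-augmented update described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function name, signature, and the system-identification setup are assumptions; the update rule is the standard LMS step plus a momentum term alpha * (w[n] - w[n-1]).

```python
import numpy as np

def mlms_identify(x, d, num_taps, mu, alpha):
    """Momentum LMS (MLMS) adaptive filter, illustrative sketch.

    x        : input signal (1-D array)
    d        : desired signal (1-D array, same length as x)
    num_taps : number of filter weights
    mu       : LMS step size
    alpha    : momentum factor, |alpha| < 1 (alpha = 0 recovers plain LMS)
    """
    w = np.zeros(num_taps)       # current weight vector w[n]
    w_prev = np.zeros(num_taps)  # previous weight vector w[n-1]
    errors = []
    for n in range(num_taps - 1, len(x)):
        # Input vector [x[n], x[n-1], ..., x[n-num_taps+1]]
        u = x[n - num_taps + 1 : n + 1][::-1]
        e = d[n] - w @ u  # a priori estimation error
        # Standard LMS step plus the momentum term alpha * (w[n] - w[n-1])
        w_new = w + mu * e * u + alpha * (w - w_prev)
        w_prev, w = w, w_new
        errors.append(e)
    return w, np.array(errors)
```

For example, identifying a short FIR system from noiseless input/output data shows the weights converging toward the true impulse response, with the momentum term smoothing the weight trajectory relative to plain LMS.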