This paper studies three related algorithms: the traditional gradient descent (GD) algorithm, the exponentiated gradient algorithm with positive and negative weights (EG± algorithm), and the exponentiated gradient algorithm with unnormalized positive and negative weights (EGU± algorithm). These algorithms have previously been analyzed in the computational learning theory community using the mistake-bound framework. Here we perform a traditional signal processing analysis in terms of the mean squared error (MSE). A relationship between the learning rate and the MSE of the predictions is derived for this family of algorithms. This relationship is used to compare the performance of the algorithms by choosing learning rates such that they converge to the same steady-state MSE. We demonstrate that when the target weight vector is sparse, the EG± algorithm typically converges more quickly than the GD and EGU± algorithms, which perform very similarly to each other. A by-product of our analysis is a reparametrization of the algorithms that provides insight into their behavior. The general form of the results we obtain is consistent with those obtained in the mistake-bound framework. The application of the algorithms to acoustic echo cancellation is then studied, and it is shown that in some circumstances the EG± algorithm converges faster than the other two algorithms.
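To make the three updates concrete, the following is a minimal sketch of their standard forms (in the style of Kivinen and Warmuth), not an excerpt from the paper itself; the learning rates, the total-weight parameter U, the sparse target, and the function names are illustrative assumptions only.

```python
import numpy as np

def gd_update(w, x, y, eta):
    """Plain gradient descent (LMS-style) step on the squared prediction error."""
    err = w @ x - y
    return w - eta * err * x

def eg_pm_update(w_pos, w_neg, x, y, eta, U=1.0):
    """EG± step: separate positive and negative weight vectors, a multiplicative
    update, then renormalization so the total weight remains U."""
    err = (w_pos - w_neg) @ x - y
    r_pos = w_pos * np.exp(-eta * err * x)
    r_neg = w_neg * np.exp(+eta * err * x)
    Z = (r_pos.sum() + r_neg.sum()) / U
    return r_pos / Z, r_neg / Z

def egu_pm_update(w_pos, w_neg, x, y, eta):
    """EGU± step: the same multiplicative update, but without renormalization."""
    err = (w_pos - w_neg) @ x - y
    return w_pos * np.exp(-eta * err * x), w_neg * np.exp(+eta * err * x)

# Toy usage on a sparse target (hypothetical settings, for illustration only).
rng = np.random.default_rng(0)
d = 20
target = np.zeros(d)
target[0] = 0.5                      # sparse target weight vector
w = np.zeros(d)                      # GD weights
wp = np.full(d, 0.5 / d)             # EG± positive weights (uniform start)
wn = np.full(d, 0.5 / d)             # EG± negative weights (uniform start)
for _ in range(2000):
    x = rng.standard_normal(d)
    y = target @ x + 0.01 * rng.standard_normal()
    w = gd_update(w, x, y, eta=0.01)
    wp, wn = eg_pm_update(wp, wn, x, y, eta=0.5, U=1.0)
```

Under assumptions like these, the learning rates would be tuned so that both algorithms reach the same steady-state MSE before their convergence speeds are compared.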