Recently, a novel maximum-likelihood sequence estimation (MLSE) equalizer was reported that avoids explicit estimation of the channel impulse response. Instead, it relies on the fact that the (noise-free) channel outputs needed by the Viterbi algorithm coincide with the points around which the received (noisy) samples are clustered, and can thus be estimated directly with the aid of a supervised clustering method. Moreover, this is achieved in a computationally efficient manner that exploits the channel linearity and the symmetries underlying the transmitted signal constellation. The resulting computational savings over the conventional MLSE equalization scheme are significant even for the relatively short channels for which MLSE equalization is practically applicable. Simulations demonstrated that the performance of this algorithm is close to that obtained with a least-squares (LS) channel estimator, while its computational complexity is even lower than that of the least-mean-squares (LMS)-trained MLSE equalizer. This paper investigates the relationship of the center estimation (CE) part of the proposed equalizer with the LS method. It is proved that, when LS is trained with the training sequence employed by CE, the two methods yield the same solution. When LS is trained with random data, however, it outperforms CE, with the performance gap growing in proportion to the channel length. A modified CE method, called MCE, is therefore developed that attains the performance of LS with perfectly random data while remaining much simpler computationally than classical LS estimation. These results confirm CE as a methodology that combines high performance, simplicity, and low computational cost, as required in a practical equalization task. An alternative, algebraic viewpoint on the CE method is also provided.
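The core idea can be illustrated with a minimal numerical sketch. The code below is not the paper's CE algorithm (which additionally exploits channel linearity and constellation symmetries for efficiency); it only demonstrates, under an assumed BPSK signal and a hypothetical 3-tap channel, that the noise-free channel outputs (the cluster centers) can be recovered by averaging the received samples that share the same transmitted symbol tuple, and that the result agrees with an LS channel estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.2])       # hypothetical channel impulse response
L = len(h)
N = 5000
s = rng.choice([-1.0, 1.0], size=N)  # BPSK training symbols
# Received samples: linear channel output plus white Gaussian noise
x = np.convolve(s, h)[:N] + 0.05 * rng.standard_normal(N)

# Supervised center estimation (naive form): group the received samples
# by the L-tuple of transmitted symbols that produced them, then average.
# Each average estimates one noise-free channel output (cluster center).
groups = {}
for n in range(L - 1, N):
    key = tuple(s[n - L + 1:n + 1])
    groups.setdefault(key, []).append(x[n])
centers = {k: np.mean(v) for k, v in groups.items()}

# LS channel estimate for comparison: x[n] ~ sum_k h[k] * s[n-k]
S = np.column_stack([s[L - 1 - k:N - k] for k in range(L)])
h_ls, *_ = np.linalg.lstsq(S, x[L - 1:], rcond=None)

# Each averaged center should match the noise-free output implied
# by the LS channel estimate for the same symbol tuple.
for key, c in centers.items():
    pred = sum(h_ls[k] * key[-1 - k] for k in range(L))
    assert abs(c - pred) < 0.05
```

With BPSK and a length-L channel there are 2^L such centers; the conditional averages and the LS-based reconstructions coincide up to the residual noise, which is the relationship between CE and LS that the paper analyzes.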