Abstract Winner-take-all algorithms are widely used in clustering analysis, but they suffer from problems ranging from cluster underutilization to extended training time. Some solutions to these problems are addressed here. It is shown that using the maximum-likelihood criterion instead of the Euclidean distance metric results in better clustering. The clusters are represented by a set of neurons, each with a Gaussian receptive field; for these Gaussian neurons, the covariance matrices are learned in addition to the centers. The one-winner condition is relaxed by maximizing the likelihood function of the mixture density of the samples, which produces larger likelihood values and more normally distributed clusters. A fast mixture-likelihood clustering algorithm is provided for both batch and pattern learning modes. Convergence analysis and experimental results are also presented.
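To make the idea concrete, here is a minimal sketch of batch mixture-likelihood clustering of the kind the abstract describes: each cluster is a Gaussian neuron whose center and covariance are both learned, and the one-winner rule is relaxed into soft responsibilities that maximize the mixture likelihood (an EM-style update). This is an illustrative sketch under standard mixture-model assumptions, not the authors' exact algorithm; the farthest-point initialization and the regularization term `1e-6*np.eye(d)` are implementation choices of this sketch.

```python
import numpy as np

def gaussian_pdf(X, mean, cov):
    """Multivariate Gaussian density of each row of X (a Gaussian receptive field)."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(cov))
    # Mahalanobis form: sum_jk diff[i,j] * inv[j,k] * diff[i,k]
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.exp(expo) / norm

def mixture_likelihood_clustering(X, k, iters=50, seed=0):
    """Batch EM-style updates: soft winners instead of a single winner."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Farthest-point initialization of the centers (a choice of this sketch).
    means = [X[rng.integers(n)]]
    for _ in range(1, k):
        d2 = np.min([np.sum((X - m) ** 2, axis=1) for m in means], axis=0)
        means.append(X[np.argmax(d2)])
    means = np.array(means)
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities relax the one-winner condition.
        resp = np.stack([w * gaussian_pdf(X, m, c)
                         for w, m, c in zip(weights, means, covs)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate centers AND covariances (not centers alone).
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return means, covs, weights
```

Because each neuron carries its own covariance, elongated or differently scaled clusters are fit with higher likelihood than a Euclidean (spherical, hard-winner) rule would achieve.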