Abstract
Winner-take-all algorithms are commonly used techniques in clustering analysis. However, they suffer from problems ranging from cluster underutilization to long training times. Solutions to these problems are addressed here. It is shown that using the maximum-likelihood criterion instead of the Euclidean distance metric results in better clustering. The clusters are represented by a set of neurons, each with a Gaussian receptive field. For these Gaussian neurons, the covariance matrices, in addition to the centers, are learned. The one-winner condition is relaxed by maximizing the likelihood function of the mixture density of the samples. This produces larger likelihood values and more normally distributed clusters. A fast mixture-likelihood clustering algorithm is provided for both batch and pattern learning modes. Convergence analysis and experimental results are also presented.
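The following is a minimal sketch of the general idea described above: Gaussian "neurons" whose centers and covariance matrices are both learned, with the one-winner condition relaxed by maximizing the mixture likelihood through soft responsibilities (an EM-style batch update). It illustrates the mixture-likelihood criterion only; it is not the paper's specific fast algorithm, and all function and variable names are illustrative assumptions.

```python
# Sketch: batch mixture-likelihood clustering with Gaussian receptive fields.
# Assumes an EM-style update; not the authors' fast algorithm.
import numpy as np

def gaussian_pdf(X, mean, cov):
    """Multivariate Gaussian density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
    return np.exp(expo) / norm

def mixture_likelihood_clustering(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]              # cluster centers
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)   # receptive fields
    weights = np.full(k, 1.0 / k)                           # mixing proportions

    for _ in range(n_iter):
        # "E-step": soft (relaxed one-winner) responsibilities of each neuron
        resp = np.column_stack([w * gaussian_pdf(X, m, c)
                                for w, m, c in zip(weights, means, covs)])
        resp /= resp.sum(axis=1, keepdims=True)

        # "M-step": re-estimate centers, covariances, and weights so the
        # mixture log-likelihood of the samples does not decrease
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = ((resp[:, j, None] * diff).T @ diff) / nk[j] \
                      + 1e-6 * np.eye(d)

    # Final mixture log-likelihood of the data under the learned model
    mix = np.column_stack([w * gaussian_pdf(X, m, c)
                           for w, m, c in zip(weights, means, covs)])
    log_lik = np.sum(np.log(mix.sum(axis=1)))
    return means, covs, weights, log_lik
```

A pattern (per-sample) learning mode would replace the batch re-estimation with small incremental updates of the centers and covariances after each presented sample, in the direction that increases the same mixture likelihood.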