Abstract
Building a nonparametric model to estimate a probability density function p(x) is essential in vector quantization, pattern recognition, control, and many other areas. A generalization of Kohonen learning, winning-weighted competitive learning (WWCL), is presented; it achieves a better approximation of p(x) and faster learning convergence by introducing the principle of maximum information preservation into the learning rule. WWCL is a promising alternative and improvement to the generalized Lloyd algorithm (GLA), an iterative descent algorithm whose distortion function decreases monotonically toward a local minimum. In contrast, WWCL is an online algorithm: the codebook is designed while the training data arrive, and the reduction of the distortion function is not necessarily monotonic. Experimental results show that WWCL consistently yields better codebooks than both Kohonen learning and the GLA in terms of distortion and convergence rate.
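The abstract does not give the WWCL update rule itself, but the setting it describes, online competitive learning that designs a codebook as training vectors arrive, while weighting the winner selection so that codevector usage better preserves information about p(x), can be illustrated with a frequency-sensitive variant of winner-take-all learning. The sketch below is a minimal stand-in, not the paper's algorithm: the win-count scaling, the decaying learning rate, and all function names (`fscl_codebook`, `distortion`) are illustrative assumptions.

```python
import numpy as np

def fscl_codebook(data, k, lr0=0.5, seed=0):
    """Online competitive learning sketch (NOT the paper's WWCL).

    Each unit's distance to the input is scaled by its win count, so
    under-used codevectors stay competitive -- a crude stand-in for
    weighting winners to preserve more information about p(x).
    """
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    wins = np.ones(k)  # per-unit win counts
    for t, x in enumerate(data, start=1):
        # winner: smallest win-weighted Euclidean distance
        j = np.argmin(wins * np.linalg.norm(codebook - x, axis=1))
        lr = lr0 / np.sqrt(t)  # decaying learning rate (assumed schedule)
        codebook[j] += lr * (x - codebook[j])  # move winner toward input
        wins[j] += 1
    return codebook

def distortion(data, codebook):
    """Mean distance from each vector to its nearest codevector."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```

Because the codebook is updated per arriving sample rather than per full pass, the measured distortion need not fall monotonically between samples, which mirrors the online, non-monotonic behavior the abstract attributes to WWCL, in contrast to the batch GLA.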