Abstract

Summary form only given. The LBG algorithm and the Kohonen learning algorithm (KLA) both suffer from entrapment in local minima, manifested as empty cells in LBG and never-winning codevectors in KLA. Although Kohonen learning relaxes the dependence on initialization compared with the LBG algorithm, it still relies on the initial conditions. We point out the principle of maximum information preservation for estimating an unknown probability density function and for unsupervised learning. Accordingly, we introduce winning-weighted competition into the design phase of vector quantizers, together with the corresponding implementation methods. The winning-weighted competitive learning (WWCL) proposed in this paper consists of three rules: a competition rule using the winning-weighted distortion measure $d_p(X, Y) = (1 + \lambda(p - 1/N))\,\|X - Y\|^2$ in place of the direct Euclidean distance; a win-rate update $p_i(t+1) = p_i(t) + \alpha/M$ applied to the winning codevector; and a codevector learning law $Y_i(t+1) = Y_i(t) + c_i\,\alpha(t)\,(X - Y_i(t))$, where the competition status $c_i$ of the $i$th codevector is one for the winner and zero for a loser. WWCL allows neurons to win with roughly equal probability, makes the distribution of synaptic vectors approximate that of the input space, and removes the dependence on initial conditions in vector quantizer design. The performance of our algorithm is usually better than that of Kohonen learning and the LBG algorithm in terms of expected distortion and/or learning speed, and global optima were obtained in our experiments. Experimental results of the proposed learning scheme are presented and compared with the LBG algorithm and the KLA.
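The three rules above can be read as one update loop. The following is a minimal sketch of that reading, not the authors' implementation: the function name wwcl_train is hypothetical, and the codebook initialization from random training samples, the linearly decaying schedule for $\alpha(t)$, and the interpretation of $M$ as the training-set size are all assumptions the abstract does not specify.

```python
import numpy as np

def wwcl_train(X, N, steps, alpha=0.05, lam=0.5, rng=None):
    """Sketch of one WWCL iteration scheme (hypothetical reading of the
    abstract): the winner is selected under the winning-weighted distortion
    d_p(x, y) = (1 + lam * (p - 1/N)) * ||x - y||^2, so codevectors that win
    often are penalized and all neurons tend to win equally often."""
    rng = np.random.default_rng() if rng is None else rng
    M = len(X)                                   # assumed: training-set size
    # Assumed initialization: N distinct training samples as the codebook.
    Y = X[rng.choice(M, size=N, replace=False)].astype(float)
    p = np.full(N, 1.0 / N)                      # win rates, start uniform
    for t in range(steps):
        x = X[rng.integers(M)]                   # draw a training vector
        # Competition rule: winning-weighted distortion, not plain Euclidean.
        d = (1.0 + lam * (p - 1.0 / N)) * np.sum((x - Y) ** 2, axis=1)
        i = int(np.argmin(d))                    # winner under d_p
        p[i] += alpha / M                        # win-rate update for winner
        a_t = alpha * (1.0 - t / steps)          # assumed decay of alpha(t)
        Y[i] += a_t * (x - Y[i])                 # c_i = 1 only for the winner
    return Y

# Example use on synthetic 2-D data (illustrative only):
# codebook = wwcl_train(np.random.rand(1000, 2), N=16, steps=20000)
```

Because the penalty term $\lambda(p - 1/N)$ inflates the distortion of codevectors whose win rate exceeds $1/N$, no cell stays empty and no codevector is starved, which is the mechanism the abstract credits for removing the initialization dependence.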
