Abstract

The disadvantages of the generalized learning vector quantization (GLVQ) and fuzzy generalized learning vector quantization (FGLVQ) algorithms are discussed, and a revised GLVQ (RGLVQ) algorithm is proposed. Because the iterative coefficients of the proposed algorithm are properly bounded, its performance is invariant under uniform scaling of the entire data set, unlike Pal's GLVQ, and its initial learning rate is not sensitive to the number of prototypes, unlike Karayiannis's FGLVQ. The proposed algorithm is tested and evaluated on the IRIS data set. Its efficiency is also illustrated by its use in codebook design for image compression based on vector quantization. The training time of the RGLVQ algorithm is about 20% less than that of Karayiannis's FGLVQ, with comparable performance.
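The abstract's key claim is that bounding the iterative coefficients makes the update invariant under uniform scaling of the data. A minimal sketch of a GLVQ-style competitive update with normalized (hence bounded) coefficients illustrates the idea; the exact RGLVQ coefficients are defined in the paper and are not reproduced here, so this is an assumption-laden illustration only.

```python
import numpy as np

def glvq_style_update(prototypes, x, lr):
    """One GLVQ-style competitive update (illustrative sketch only).

    Every prototype moves toward the input x, weighted by a coefficient
    derived from its squared distance to x. The coefficients are
    normalized to sum to 1, so each lies in [0, 1]; because the
    normalization cancels any common scale factor in the distances,
    the update is equivariant under uniform scaling of the data, which
    mirrors the scaling-invariance property claimed for RGLVQ.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)   # squared distances
    w = 1.0 / (d + 1e-12)                       # inverse-distance weights
    w = w / w.sum()                             # normalized: bounded in [0, 1]
    return prototypes + lr * w[:, None] * (x - prototypes)
```

If the data set and the prototypes are both multiplied by a constant c, the normalized weights are unchanged and the updated prototypes are simply the original update scaled by c, so the algorithm's behavior does not depend on the overall scale of the data.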
