Abstract

First, we identify an algorithmic defect of the generalized learning vector quantization (GLVQ) scheme that causes it to behave erratically under certain scalings of the input data. We demonstrate the problem using the IRIS data. Then, we show that GLVQ can behave incorrectly because its learning rates are reciprocally dependent on the sum of squared distances from an input vector to the node weight vectors. Finally, we propose a new family of models -- the GLVQ-F family -- that remedies the problem. We derive competitive learning algorithms for the GLVQ-F model and prove that they are invariant to all positive scalings of the data. The GLVQ-F learning rule updates all nodes, using a learning rate for each node that is inversely proportional to its distance from the input data point. We illustrate the failure of GLVQ and the success of GLVQ-F with the ubiquitous IRIS data. © (1995) COPYRIGHT SPIE--The International Society for Optical Engineering.
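The abstract's description of the learning rule -- every node is updated, with a learning rate inversely related to that node's distance from the input, yielding invariance to positive scalings of the data -- can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the fuzzy-c-means-style membership weights `u` and the parameters `alpha` and `m` are assumptions chosen to exhibit the stated properties.

```python
import numpy as np

def glvq_f_style_update(x, V, alpha=0.1, m=2.0, eps=1e-12):
    """One illustrative GLVQ-F-style competitive-learning step.

    x : input vector, shape (d,)
    V : node weight vectors, shape (c, d)

    All nodes are updated; the weight u[i] given to node i shrinks as
    its distance from x grows. Because u depends only on distance
    *ratios*, the rule is invariant to positive scaling of the data.
    (Membership form is an FCM-style assumption, not the paper's.)
    """
    d = np.sum((V - x) ** 2, axis=1) + eps          # squared distances
    ratios = (d[:, None] / d[None, :]) ** (1.0 / (m - 1.0))
    u = 1.0 / ratios.sum(axis=1)                    # inverse-distance weights
    return V + alpha * (u ** m)[:, None] * (x - V)  # move all nodes toward x
```

Because the weights depend only on ratios of distances, rescaling both the input and the prototypes by any positive constant rescales the updated prototypes by exactly that constant, which is the scale-invariance property the paper proves for GLVQ-F.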
