Abstract

The Learning Vector Quantization (LVQ) algorithm and its variants have been employed in some fuzzy neural networks to automatically derive membership functions from training data. Although several improvements to the LVQ algorithm have been proposed, its problematic areas remain: the selection of the number of clusters, the initial weights, proper training parameters, and forced termination. These problematic areas in deriving the centroids of one-dimensional data are illustrated with an artificially generated experimental data set applied to LVQ, Generalized LVQ (GLVQ), and Fuzzy C-Means (FCM). A Modified Learning Vector Quantization (MLVQ) algorithm is presented in this chapter to address these problematic areas for one-dimensional data. MLVQ models the development of the nervous system in two stages: a first stage in which the basic architecture and coarse connection patterns are laid out, and a second stage in which the initial architecture is refined in activity-dependent ways. MLVQ determines the learning constant parameter and modifies the terminating condition of the LVQ algorithm so that convergence can be achieved and easily detected. Experiments on the MLVQ algorithm are performed and contrasted against LVQ, GLVQ, and FCM. The results show that MLVQ determines the number of clusters and converges to the centroids; they also show that MLVQ is insensitive to the sequence of the training data, identifies the centroids of overlapping clusters, and ignores outliers without treating them as separate clusters. Results obtained by combining the MLVQ algorithm and Gaussian membership functions with the Pseudo Outer-Product Fuzzy Neural Network using the Compositional Rule of Inference and Singleton fuzzifier (POPFNN-CRI(S)) on pattern classification and time-series prediction are also provided to demonstrate the effectiveness of the fuzzy membership functions derived using MLVQ.
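
For context, below is a minimal sketch of the conventional unsupervised one-dimensional LVQ update that MLVQ refines; it exhibits the problematic areas named above (a fixed cluster count, random initial weights, a hand-chosen learning constant and decay, and a tolerance-based forced termination). The function name lvq_1d and all parameter values are illustrative assumptions, not the authors' MLVQ implementation.

```python
import random

def lvq_1d(data, n_clusters, alpha=0.1, decay=0.99, epochs=100, tol=1e-6):
    """Competitive learning: move the nearest centroid toward each sample."""
    # Initial weights: drawn at random from the data (a known problem area).
    centroids = random.sample(list(data), n_clusters)
    for _ in range(epochs):
        max_shift = 0.0
        for x in data:
            # Winner-take-all: select the centroid closest to this sample.
            i = min(range(n_clusters), key=lambda k: abs(x - centroids[k]))
            shift = alpha * (x - centroids[i])
            centroids[i] += shift
            max_shift = max(max_shift, abs(shift))
        alpha *= decay  # hand-tuned, monotonically shrinking learning constant
        if max_shift < tol:  # forced termination once updates become tiny
            break
    return centroids

# Example: two apparent clusters around 1.0 and 5.0.
print(lvq_1d([1.0, 1.2, 0.9, 5.1, 4.8, 5.0], n_clusters=2))
```

Because the winner depends on the random initialization and the sample order, repeated runs of this sketch can settle on different centroids, which is precisely the sensitivity that MLVQ's two-stage scheme is designed to remove.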
