Abstract
Two training algorithms for self-evolving neural networks are discussed for rule-based data analysis. Efficient classification is achieved with fewer automatically added clusters, and application data are analyzed by interpreting the trained neural network as a fuzzy rule-based system. The learning vector quantization algorithm has been modified so that the prototype neuron layer acquires a self-evolving character based on sub-Bayesian decision making. The number of prototypes required to represent fuzzy rules is determined automatically by the application data set. Compared with other methods, this approach shows better classification results for data sets with high noise or overlapping class boundaries. Classifying radial basis function networks are generalized into multiple-shape basis function networks. The learning algorithm discussed can dynamically add new neurons representing self-evolving clusters of different shapes and sizes. This yields a clear reduction in the number of neurons, and hence in the number of fuzzy rules generated, while classification accuracy increases significantly. The improvement is highly relevant to developing neural networks that are functionally equivalent to fuzzy classifiers, since transparency is strongly related to the compactness of the system.
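To make the self-evolving idea concrete, the following is a minimal illustrative sketch, not the authors' algorithm: it assumes a simple distance threshold (spawn_radius) and an LVQ-style update, whereas the paper uses sub-Bayesian decision making and multiple-shape basis functions. It shows only the core mechanism of growing the prototype layer when no existing prototype adequately represents a training sample.

```python
# Illustrative sketch of a self-evolving prototype layer.
# Assumption: a new prototype is spawned when the nearest existing prototype
# is too far away or carries the wrong class label; this criterion is a
# simplification chosen for demonstration, not the method of the paper.
import numpy as np

class EvolvingPrototypeClassifier:
    def __init__(self, spawn_radius=1.0, learning_rate=0.05):
        self.spawn_radius = spawn_radius    # assumed threshold for adding a prototype
        self.learning_rate = learning_rate  # step size for the LVQ-style update
        self.prototypes = []                # list of (center, class label) pairs

    def _nearest(self, x):
        dists = [np.linalg.norm(x - c) for c, _ in self.prototypes]
        i = int(np.argmin(dists))
        return i, dists[i]

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append((x.copy(), y))
            return
        i, d = self._nearest(x)
        center, label = self.prototypes[i]
        if d > self.spawn_radius or label != y:
            # No adequate prototype exists: evolve the layer by adding one.
            self.prototypes.append((x.copy(), y))
        else:
            # Attract the winning prototype toward the sample (LVQ-style).
            self.prototypes[i] = (center + self.learning_rate * (x - center), label)

    def predict(self, x):
        i, _ = self._nearest(np.asarray(x, dtype=float))
        return self.prototypes[i][1]
```

In such a scheme, each surviving prototype can be read as one fuzzy rule, so the number of automatically added prototypes directly controls the compactness, and hence the transparency, of the resulting rule base.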