Abstract

A benchmark study of two self-organizing artificial neural network models, ART2 and DIGNET, is conducted. The architectural differences and learning procedures of the two models are compared. The performance of ART2 and DIGNET on data clustering and pattern recognition problems with noise or interference is investigated through computer simulations. It is shown that DIGNET generally offers faster learning and better clustering performance on statistical pattern recognition problems. DIGNET has a simpler architecture, and its system parameters can be determined analytically from the self-organizing process. In particular, the threshold value used in DIGNET can be determined directly from a given lower bound on the desired signal-to-noise ratio (SNR). A modified model based on the features of ART2 and DIGNET is also derived and investigated. Its simpler architecture combines the ART2 structure with the advantages of the DIGNET model. The concepts of well depth and stage age, originally introduced in DIGNET, are applied in the modified model. The modified model preserves the noise suppression, contrast enhancement, and self-organizing stable pattern recognition features of ART2, yet provides a specific method for adjusting the network parameters. The network performs a variant of K-means learning without a priori knowledge of the actual number of clusters. The networks discussed in this paper are benchmarked on clustering and pattern recognition problems, and comparative simulation results are presented.
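
As a rough, hypothetical illustration of the threshold-based self-organization summarized above (not the authors' implementation), the Python sketch below assigns each input pattern to the nearest existing cluster centre if it falls within a fixed distance threshold and otherwise creates a new centre, while counting captured patterns in a role analogous to the paper's "well depth". The function name, the Euclidean metric, the running-mean update, and the synthetic data are all assumptions made for illustration only.

```python
import numpy as np

def dignet_like_clustering(patterns, threshold):
    """Illustrative sketch of DIGNET-style self-organizing clustering:
    each input is captured by the nearest existing centre ("attraction
    well") if it lies within the threshold; otherwise a new well is
    created. The 'depth' counter stands in for the well depth concept;
    the exact update rules in DIGNET may differ."""
    centers = []   # cluster centres (attraction wells)
    depths = []    # well depths (number of captured patterns)
    for x in patterns:
        if centers:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= threshold:
                # Running-mean update of the winning centre,
                # analogous to an incremental K-means step
                depths[j] += 1
                centers[j] += (x - centers[j]) / depths[j]
                continue
        # No well is close enough: open a new one at this pattern
        centers.append(np.array(x, dtype=float))
        depths.append(1)
    return centers, depths

# Hypothetical usage: two noisy clusters in 2-D
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                  rng.normal(3.0, 0.3, (50, 2))])
centers, depths = dignet_like_clustering(data, threshold=1.0)
print(len(centers), depths)
```

Note that, as in the paper's description, the number of clusters is not supplied in advance; it emerges from the threshold, which the paper relates to a lower bound on the desired SNR.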
