Abstract

Summary form only given. The presentation surveys major difficulties in designing neural networks. It turns out that popular MLP (Multi-Layer Perceptron) networks in most cases produce far from satisfactory results. The popular EBP (Error Back Propagation) algorithm is also very slow and often incapable of training the best neural network architectures. The very powerful and fast LM (Levenberg-Marquardt) algorithm has unfortunately been implemented only for MLP networks. Moreover, because it requires the inversion of a matrix whose size is proportional to the number of training patterns, the LM algorithm can be used only for small problems. The greatest frustration with neural networks, however, arises when an oversized network is trained with too few training patterns. Such networks, with an excessive number of neurons, can be trained to very small errors, but they respond very poorly to new patterns that were not used in training. Most of these frustrations can be eliminated when smaller, more effective architectures are used and trained with the newly developed NBN (Neuron-by-Neuron) algorithm. The methods of computational intelligence can be very successful, but they must be used with great care: popular training algorithms are often unable to tune a neural network to the required accuracy without losing its generalization ability, and such a computational intelligence system may then fail on cases that were not used in training. A comparison of different neural network architectures is given, and it is shown how to develop and train close-to-optimal topologies.
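
As a rough illustration (not part of the original presentation), a single LM weight update solves a damped least-squares system rather than following the plain gradient. The sketch below is in Python with NumPy; the names w (weight vector), J (Jacobian of the pattern errors with respect to the weights), e (error vector), and mu (damping factor) are illustrative assumptions, not from the source.

    import numpy as np

    def lm_step(w, J, e, mu):
        # One Levenberg-Marquardt update: solve (J^T J + mu*I) dw = J^T e
        # instead of forming the matrix inverse explicitly. J has one row
        # per training pattern (and per network output); e holds the
        # corresponding residuals, with J defined as the Jacobian of e.
        n_weights = J.shape[1]
        A = J.T @ J + mu * np.eye(n_weights)
        dw = np.linalg.solve(A, J.T @ e)
        return w - dw

Because J stores one row per training pattern, its memory footprint (and the cost of forming J^T J) grows with the pattern count; this is the scaling limitation the abstract attributes to LM.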
