Abstract

Artificial neural networks (ANNs) are known as “universal approximators” and “computational models” with particular characteristics such as the ability to learn or adapt, to organize, or to generalize data. Because of their automatic (self-adaptive) learning process and their capability to model complex, nonlinear surfaces, ANN classifiers have become a popular choice for many machine intelligence and pattern recognition applications. In this chapter, we shall present a technique for the automatic design of artificial neural networks (ANNs) that evolves toward the optimal network configuration(s) within an architecture space (AS), a family of ANNs. The AS can be formed according to the problem at hand and may encapsulate an indefinite number of network configurations. The evolutionary search technique is based entirely on multidimensional particle swarm optimization (MD PSO). With a proper encoding of the network configurations and parameters into particles, MD PSO can seek the positional optimum in the error space and the dimensional optimum in the AS. The optimum dimension converged upon at the end of an MD PSO process corresponds to a unique ANN configuration, from which the network parameters (connections, weights, and biases) can then be resolved via the positional optimum reached in that dimension. In addition, the proposed technique generates a ranked list of network configurations, from best to worst. This is a crucial piece of information, indicating which configurations are potential alternatives to the best one and which should not be used at all for a particular problem. In this chapter, the architecture space is defined over feed-forward, fully connected ANNs, so that conventional techniques such as back-propagation, as well as other evolutionary methods in this field, can be applied for comparison. We shall then apply the evolutionary ANNs to highly challenging synthetic problems to test the optimality of the evolved networks, and to benchmark problems to test their generalization capability and to make comparative evaluations against several competing techniques. We shall demonstrate that MD PSO generally evolves to optimum or near-optimum networks and has a superior generalization capability. In addition, MD PSO naturally favors a low-dimensional solution whenever it performs competitively with a higher-dimensional counterpart; this native tendency steers the evolution process toward compact network configurations in the architecture space rather than more complex ones, as long as optimality prevails.
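To make the search mechanics concrete, the following is a minimal, hypothetical sketch of the MD PSO loop described above: each particle carries a position, velocity, and personal best in every dimension of a small architecture space, plus a current dimension and a dimensional velocity, while the swarm tracks one global best per dimension together with the best dimension found so far (dbest). The toy `fitness` objective, the `dims` mapping from a dimension index to a parameter count, and all coefficient values are illustrative assumptions standing in for the chapter's actual ANN encoding and training error; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy architecture space: dimension index d selects a configuration whose
# positional search space has dims[d] parameters. In the chapter, d would
# encode an ANN configuration and `fitness` its training error.
dims = [2, 3, 5, 8, 13]
ND = len(dims)

def fitness(x, d):
    # Illustrative objective: a sphere centered at 0.5 plus a term that
    # pretends the problem "needs" about 5 parameters, so index 2 is optimal.
    return float(np.sum((x - 0.5) ** 2)) + 0.1 * abs(dims[d] - 5)

P, ITER = 20, 200
w, c1, c2 = 0.72, 1.49, 1.49  # standard PSO inertia/acceleration terms

# Per particle: position, velocity, and personal best in every dimension,
# plus a current dimension and a dimensional velocity (MD PSO's extra axis).
pos = [[rng.uniform(-1, 2, dims[d]) for d in range(ND)] for _ in range(P)]
vel = [[np.zeros(dims[d]) for d in range(ND)] for _ in range(P)]
pbest = [[x.copy() for x in pos[i]] for i in range(P)]
pbest_f = [[fitness(pos[i][d], d) for d in range(ND)] for i in range(P)]
cur_d = rng.integers(0, ND, P)   # each particle's current dimension
vel_d = np.zeros(P)              # dimensional velocities

# One global best per dimension, and the best dimension so far (dbest).
gbest, gbest_f = [], []
for d in range(ND):
    best_i = int(np.argmin([pbest_f[i][d] for i in range(P)]))
    gbest.append(pbest[best_i][d].copy())
    gbest_f.append(pbest_f[best_i][d])
dbest = int(np.argmin(gbest_f))

for _ in range(ITER):
    for i in range(P):
        d = int(cur_d[i])
        r1, r2 = rng.random(dims[d]), rng.random(dims[d])
        # Positional update within the particle's current dimension.
        vel[i][d] = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
        pos[i][d] = pos[i][d] + vel[i][d]
        f = fitness(pos[i][d], d)
        if f < pbest_f[i][d]:
            pbest[i][d], pbest_f[i][d] = pos[i][d].copy(), f
            if f < gbest_f[d]:
                gbest[d], gbest_f[d] = pos[i][d].copy(), f
                if gbest_f[d] < gbest_f[dbest]:
                    dbest = d
        # Dimensional update: drift toward the particle's own best dimension
        # and the swarm's dbest, then clamp to a valid index.
        pd = int(np.argmin(pbest_f[i]))
        vel_d[i] = (w * vel_d[i]
                    + c1 * rng.random() * (pd - d)
                    + c2 * rng.random() * (dbest - d))
        cur_d[i] = int(np.clip(round(d + float(vel_d[i])), 0, ND - 1))

print("dbest:", dbest, "-> parameter count:", dims[dbest],
      "best error:", gbest_f[dbest])
```

Note that the dimensional update deliberately mirrors the positional one; it is this symmetric drift toward dbest that, in the scheme described above, lets the swarm settle on a compact configuration whenever it performs competitively with larger ones.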
