Abstract

Evolutionary neural networks (ENNs) combine evolutionary computation with neural networks to solve many interesting problems [11]. They determine the weights and the topology of a neural network simultaneously and automatically, minimizing the intervention of human experts. This makes them an attractive technology for the optimization, learning, and control of intelligent systems, and it has raised many research issues. Figure 1 shows the general procedure of evolutionary neural networks. First, a representation must be chosen, because there are many different ways to encode a neural network genetically. In the initialization step, a number of neural networks are created at random; naturally, they do not yet perform well on the problem to be optimized. The next step evaluates the goodness of each neural network on the given problem. Based on these fitness values, some neural networks are selected from the population and genetic operations are applied to them: new offspring are created by exchanging genetic code between two successful neural networks and by mutating small portions of the code. The procedure then returns to the evaluation step and repeats until a stopping criterion is satisfied, as sketched below. According to Yao's work [11], evolutionary neural networks can be classified into the evolution of connection weights, the evolution of architectures, and the evolution of learning rules, among others. In the evolution of connection weights, the topology of the neural network is assumed to be fixed and only the weights are trained with evolutionary algorithms. In the evolution of architectures, the network topology and, in some cases, the node transfer functions are evolved.
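
The following is a minimal, illustrative Python sketch of the loop described above for the connection-weight-evolution case. The fixed 2-4-1 feedforward topology, the toy XOR task, one-point crossover, Gaussian mutation, truncation selection, and all helper names (decode, forward, fitness, and so on) are assumptions made for this example, not choices taken from the paper.

```python
# A minimal sketch of the evolutionary loop, assuming a fixed-topology
# feedforward network whose connection weights are evolved directly.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumption): XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
# Genome layout: flattened weights and biases of a 2-4-1 network (W1, b1, W2, b2).
GENOME_LEN = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1

def decode(genome):
    """Unpack a flat genome into the weights of the fixed topology."""
    i = 0
    W1 = genome[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = genome[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = genome[i:i + N_HIDDEN].reshape(N_HIDDEN, 1); i += N_HIDDEN
    b2 = genome[i:i + 1]
    return W1, b1, W2, b2

def forward(genome, X):
    W1, b1, W2, b2 = decode(genome)
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

def fitness(genome):
    """Higher is better: negative mean squared error on the task."""
    return -np.mean((forward(genome, X) - y) ** 2)

def crossover(p1, p2):
    """One-point crossover: exchange genetic code between two parents."""
    point = rng.integers(1, GENOME_LEN)
    return np.concatenate([p1[:point], p2[point:]])

def mutate(genome, rate=0.1, scale=0.5):
    """Perturb a small portion of the genome with Gaussian noise."""
    mask = rng.random(GENOME_LEN) < rate
    return genome + mask * rng.normal(0.0, scale, GENOME_LEN)

# Initialization: a population of random networks.
POP_SIZE = 50
population = [rng.normal(0.0, 1.0, GENOME_LEN) for _ in range(POP_SIZE)]

for generation in range(200):
    # Evaluation: score every network on the problem.
    scores = np.array([fitness(g) for g in population])
    if scores.max() > -1e-3:  # stopping criterion
        break
    # Selection: keep the fitter half of the population.
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[:POP_SIZE // 2]]
    # Reproduction: crossover and mutation produce new offspring.
    children = []
    while len(children) < POP_SIZE - len(parents):
        i, j = rng.choice(len(parents), size=2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best))
print("outputs:", np.round(forward(best, X), 2))
```

In the evolution-of-architectures case, the genome would additionally encode the topology (and possibly node transfer functions), so the decoding step would construct the network structure rather than only filling in weights.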
