Abstract

The design of neural network models involves numerous complexities, including the determination of input vectors, choosing the number of hidden layers and their computational units, and specifying the activation functions for those units. The combinatorial possibilities are daunting, yet experience has yielded informal guidelines that can be useful. Alternatively, current research on genetic algorithms (GA) suggests that they might be of practical use as a formal method of determining ‘good’ architectures for neural networks. In this paper, we use a genetic algorithm to find effective architectures for backpropagation (BP) neural networks. We compare the performance of heuristically designed BP networks with that of GA-designed BP networks. Our test domains are sets of problems having compensatory, conjunctive, and mixed decision structures. The results of our experiment suggest that heuristic methods produce architectures that are simpler and yet perform comparably well. © 1998 John Wiley & Sons, Ltd.
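The approach the abstract describes — evolving network architectures with a genetic algorithm — can be sketched in miniature. In the sketch below, each genome encodes the hidden-layer sizes of a BP network; the fitness function is a hypothetical stand-in (it rewards networks with enough total capacity, assumed here to be about 16 units, while penalizing extra layers), whereas in the paper's setting fitness would come from training the network and measuring held-out accuracy. All names and parameters are illustrative, not taken from the paper.

```python
import random

random.seed(0)

def fitness(genome):
    # Hypothetical surrogate for validation accuracy: prefer total
    # capacity near an assumed target (16 units) and fewer layers.
    capacity = sum(genome)
    return -abs(capacity - 16) - 0.5 * len(genome)

def mutate(genome):
    # Nudge one randomly chosen layer's size, keeping it >= 1 unit.
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = max(1, g[i] + random.choice([-2, -1, 1, 2]))
    return g

def crossover(a, b):
    # One-point crossover on the layer-size lists; genomes of length 1
    # cannot be cut, so just copy one parent in that case.
    if min(len(a), len(b)) < 2:
        return a[:]
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30):
    # Initial population: 1-3 hidden layers of 1-32 units each.
    pop = [[random.randint(1, 32) for _ in range(random.randint(1, 3))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best architecture (hidden-layer sizes):", best)
```

The GA loop (selection, crossover, mutation) is the generic machinery; swapping the surrogate for an actual train-and-evaluate step is what turns this toy into the architecture search the paper studies.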
