Abstract

The purpose of this paper is to optimize the structure of hierarchical neural networks. Here, structure optimization means representing a neural network with the minimum number of nodes and connections; it is performed by eliminating unnecessary connections from a trained neural network by means of a genetic algorithm. We focus on a neural network specialized for image recognition problems. The proposed method proceeds as follows. First, the Walsh–Hadamard transform is applied to the images for feature extraction. Second, the neural network is trained on the extracted features with a back-propagation algorithm. After training, unnecessary connections are eliminated from the trained neural network by means of a genetic algorithm. Finally, the neural network is retrained to recover from the degradation caused by connection elimination. To validate the usefulness of the proposed method, face recognition and texture classification examples are used. The experimental results indicate that the proposed method generates a compact neural network while maintaining generalization performance. © 2012 Wiley Periodicals, Inc. Electron Comm Jpn, 95(3): 28–36, 2012; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ecj.10384
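Since the abstract only outlines the four-step pipeline, the following is a minimal, self-contained NumPy sketch of how such a pipeline could be wired together: Walsh–Hadamard feature extraction, back-propagation training, a genetic algorithm over binary connection masks, and retraining with the pruned mask. All layer sizes, GA parameters, the fitness weighting, and the toy data are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the abstract's pipeline; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht_features(images):
    """Step 1: Walsh-Hadamard transform of flattened images (one per row)."""
    n = images.shape[1]
    return images @ hadamard(n) / n

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, W1, W2, mask1, mask2, epochs=200, lr=0.5):
    """Steps 2 and 4: back-propagation; masks zero out pruned connections."""
    for _ in range(epochs):
        h = sigmoid(X @ (W1 * mask1))           # hidden layer
        o = sigmoid(h @ (W2 * mask2))           # output layer
        d_o = (o - y) * o * (1 - o)             # output delta
        d_h = (d_o @ (W2 * mask2).T) * h * (1 - h)
        W2 -= lr * (h.T @ d_o) * mask2          # gradients masked too
        W1 -= lr * (X.T @ d_h) * mask1
    return W1, W2

def accuracy(X, y, W1, W2, m1, m2):
    o = sigmoid(sigmoid(X @ (W1 * m1)) @ (W2 * m2))
    return np.mean((o > 0.5) == (y > 0.5))

# Toy two-class data standing in for face/texture images (8-dim = 2^3).
X_raw = rng.random((40, 8))
y = (X_raw[:, 0] > 0.5).astype(float).reshape(-1, 1)
X = wht_features(X_raw)                         # Step 1: feature extraction

W1 = rng.normal(0, 0.5, (8, 6))
W2 = rng.normal(0, 0.5, (6, 1))
full1, full2 = np.ones_like(W1), np.ones_like(W2)
W1, W2 = train(X, y, W1, W2, full1, full2)      # Step 2: initial training

# Step 3: GA over binary connection masks. The fitness rewards accuracy and
# penalizes surviving connections; the 0.05 weighting is an assumption.
n_genes = W1.size + W2.size
pop = rng.random((20, n_genes)) < 0.8           # population of masks

def fitness(gene):
    m1 = gene[:W1.size].reshape(W1.shape).astype(float)
    m2 = gene[W1.size:].reshape(W2.shape).astype(float)
    return accuracy(X, y, W1, W2, m1, m2) - 0.05 * gene.mean()

for _ in range(30):                             # generations
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]     # truncation selection
    kids = parents[rng.integers(0, 10, 10)].copy()
    cut = rng.integers(1, n_genes)              # one-point crossover
    kids[:, :cut] = parents[rng.integers(0, 10, 10)][:, :cut]
    kids ^= rng.random(kids.shape) < 0.02       # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(g) for g in pop])]
m1 = best[:W1.size].reshape(W1.shape).astype(float)
m2 = best[W1.size:].reshape(W2.shape).astype(float)

W1, W2 = train(X, y, W1, W2, m1, m2, epochs=100)  # Step 4: retraining
print("kept connections:", int(best.sum()), "/", n_genes,
      "accuracy:", accuracy(X, y, W1, W2, m1, m2))
```

Note the ordering mirrors the abstract: the GA only searches over masks applied to the already-trained weights, and retraining happens once, after pruning, to recover the accuracy lost when connections are removed.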