Abstract

In this paper, we propose a method for optimizing the structure of a multilayer neural network by minimizing its generalized error in the nonlinear-approximation setting, based on the minimum description length (MDL) principle. According to this principle, the generalized error is the sum of the error of describing the model and the error of approximating the data by the neural network. From the condition of minimizing the generalized error, expressions are derived for the optimal network size (the number of synaptic connections and the number of neurons in the hidden layers). Plots are constructed of the generalized error as a function of the number of synaptic connections for different numbers of input images and a fixed number of training examples, and of the optimal number of synaptic connections as a function of the number of training examples for different numbers of input images. The difficulty of training the neural network is assessed from the ratio of the optimal number of synaptic connections to the optimal number of neurons in the hidden layers.
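
To make the trade-off concrete, the sketch below minimizes a generalized error of the assumed form E(n_w) = c_a / n_w + c_d * n_w * ln(P) / P, where n_w is the number of synaptic connections and P is the number of training examples. The decaying approximation term and the growing description term, along with the coefficients c_a and c_d, are illustrative assumptions in the spirit of the MDL principle, not the expressions derived in the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def generalized_error(n_w, P, c_a=1.0, c_d=1.0):
    # Assumed MDL-style generalized error for a network with n_w
    # synaptic connections trained on P examples:
    #   - the approximation term decays as the network grows
    #     (assumed rate c_a / n_w),
    #   - the description term grows with the bits needed to encode
    #     the weights (assumed as c_d * n_w * ln(P) / P).
    # Both terms are illustrative, not the paper's formulas.
    return c_a / n_w + c_d * n_w * np.log(P) / P

def optimal_connections(P):
    # Minimize the generalized error over the number of connections.
    res = minimize_scalar(generalized_error, bounds=(1.0, 1e6),
                          args=(P,), method="bounded")
    return res.x

for P in (100, 1000, 10000):
    n_opt = optimal_connections(P)
    print(f"P={P:6d}: optimal n_w ~ {n_opt:8.1f}, "
          f"E_gen ~ {generalized_error(n_opt, P):.4f}")

Under these assumed forms the minimizer has the closed form n_w* = sqrt(c_a * P / (c_d * ln P)), so the optimal number of connections grows roughly like sqrt(P / ln P) with the number of training examples; any real expression would depend on the paper's actual error terms.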
