Abstract

From an analytical study of the multilayer network architecture, we derive a polynomial-time algorithm for learning from examples. We call it JNN, for “Jacobian Neural Network”. Although this learning algorithm is randomized, it produces a correct network with probability 1. The JNN learning algorithm is defined for a wide variety of multilayer networks that compute real output vectors from real input vectors through one or several hidden layers, under mild assumptions on the activation functions of the hidden units. Starting from an algorithm that learns a given database exactly, we propose a regularization technique that improves performance in applications, as verified on several benchmark problems. Moreover, the JNN algorithm does not require an a priori specification of the network architecture, since the number of hidden units of a one-hidden-layer network is determined during learning. Finally, we show that a modular approach allows learning with a reduced number of weights.
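The abstract does not spell out the JNN construction itself. As a rough illustration of the two ideas it states, exact learning of a given database and a hidden-layer width that is determined by the data rather than fixed in advance, here is a minimal Python sketch. The function name `fit_exact_one_hidden_layer`, the tanh activation, and the random growth of hidden units are assumptions made for illustration only, not the paper's JNN algorithm:

```python
import numpy as np

def fit_exact_one_hidden_layer(X, Y, seed=0, tol=1e-8):
    """Grow a one-hidden-layer tanh network until its hidden-feature
    matrix has full row rank over the training set, then solve the
    output weights by least squares so the network fits the given
    database exactly. Illustrative only -- not the paper's JNN."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W, b = [], []               # hidden-unit weights and biases
    H = np.zeros((n, 0))        # hidden activations on the database
    # Randomly drawn units make H reach rank n with probability 1,
    # mirroring the abstract's almost-sure correctness claim; the
    # loop also determines the number of hidden units from the data.
    while np.linalg.matrix_rank(H, tol=tol) < n:
        w, c = rng.standard_normal(d), rng.standard_normal()
        W.append(w)
        b.append(c)
        H = np.hstack([H, np.tanh(X @ w + c)[:, None]])
    # Output weights V solve H @ V = Y exactly once rank(H) == n.
    V, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return np.array(W), np.array(b), V

# Usage: the learned network interpolates the training examples.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
Y = np.sin(X).sum(axis=1, keepdims=True)
W, b, V = fit_exact_one_hidden_layer(X, Y)
pred = np.tanh(X @ W.T + b) @ V
print(np.abs(pred - Y).max())   # near zero: exact fit on the database
```

In the abstract's terms, such an exact fit is the starting point; the proposed regularization technique would then trade some of this exactness for better performance on applications.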
