Abstract

Neural networks are often characterized as highly nonlinear systems with a fairly large number of parameters (on the order of 10^3 - 10^4), which makes the optimization of those parameters a nontrivial problem. It is therefore striking that local optimization techniques are widely used and yield reliable convergence in many cases. Since the optimization of neural networks is a high-dimensional, multi-extremal problem, one would ordinarily expect global optimization methods to be required. On the basis of a Perceptron-like unit (the building block of most neural network architectures), we analyze why local optimization is so successful in the field of neural networks. The result is that a linear approximation of the neural network can be sufficient to determine a starting point for the local optimization procedure in the nonlinear regime. This result can help in developing faster and more robust algorithms for optimizing neural network parameters.
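
The sketch below is only an illustration of this idea, not the paper's method: it fits a linear least-squares approximation to synthetic data and uses that solution as the starting point for local gradient-descent optimization of a single sigmoid, perceptron-like unit. The data, dimensions, learning rate, and iteration count are all assumptions made for the example.

```python
# Illustrative sketch (not from the paper): a linear approximation supplies the
# starting point for local optimization of a nonlinear perceptron-like unit.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data generated by a "true" perceptron-like unit (assumption for the demo).
X = rng.normal(size=(200, 5))
w_true, b_true = rng.normal(size=5), 0.3
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = sigmoid(X @ w_true + b_true)

# Step 1: linear approximation -- ordinary least squares on the raw targets.
A = np.hstack([X, np.ones((X.shape[0], 1))])        # append a bias column
theta_lin, *_ = np.linalg.lstsq(A, y, rcond=None)   # linear start point

# Step 2: local optimization of the nonlinear unit, started from theta_lin.
theta = theta_lin.copy()
lr = 0.5
for _ in range(2000):
    p = sigmoid(A @ theta)
    # gradient of 0.5 * mean((p - y)^2) with respect to theta
    grad = A.T @ ((p - y) * p * (1.0 - p)) / len(y)
    theta -= lr * grad

print("final MSE:", np.mean((sigmoid(A @ theta) - y) ** 2))
```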
