Abstract

Natural gradient learning is known to resolve the plateau problem, which is the main cause of the slow learning speed of neural networks. Adaptive natural gradient learning, an adaptive method of realizing natural gradient learning for neural networks, has also been developed, and its practical advantage has been confirmed. In this paper, we consider the generalization property of the natural gradient method. Theoretically, the standard gradient method and the natural gradient method have the same minima in the error surface, so their generalization performance should also be the same. In practice, however, the natural gradient method can reach a smaller training error while the standard method is still stuck in a plateau. In this case, the solutions that are actually obtained differ from each other, and their generalization performances also come to differ. Since such situations arise very often in practical problems, it is necessary to compare the generalization property of natural gradient learning with that of the standard method. In this paper, we show a case in which the practical generalization performance of natural gradient learning is poorer than that of the standard gradient method, and we try to solve this problem by including a regularization term in natural gradient learning.
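To make the idea concrete, the following is a minimal sketch of a natural gradient update combined with a regularization term. It assumes an L2 (weight decay) penalty and a directly inverted estimate of the Fisher information matrix; the function names (`grad_fn`, `fisher_fn`) and the damping term are hypothetical illustrations, not the adaptive scheme used in the paper, which estimates the inverse Fisher matrix online.

```python
import numpy as np

def natural_gradient_step(theta, grad_fn, fisher_fn,
                          lr=0.01, weight_decay=1e-4, damping=1e-6):
    """One natural-gradient update on a regularized training error.

    theta        : current parameter vector (1-D numpy array)
    grad_fn      : returns the ordinary gradient of the training error at theta
    fisher_fn    : returns an estimate of the Fisher information matrix at theta
    weight_decay : strength of the added L2 regularization term (assumption)
    damping      : small ridge term keeping the Fisher estimate invertible
    """
    # Gradient of the regularized error: training-error gradient plus L2 term
    g = grad_fn(theta) + weight_decay * theta
    # Damped Fisher information matrix
    F = fisher_fn(theta) + damping * np.eye(theta.size)
    # Natural gradient: precondition the gradient by the inverse Fisher matrix
    step = np.linalg.solve(F, g)
    return theta - lr * step
```

Compared with the standard gradient step `theta - lr * g`, the preconditioning by the inverse Fisher matrix is what lets the update escape plateaus, while the regularization term is the modification studied here to improve generalization.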
