Abstract
The natural gradient learning algorithm, which originated in information geometry, is known to remedy the slow convergence of gradient descent learning methods. Whereas the natural gradient algorithm is inspired by the geometric structure of the space of learning systems, other approaches accelerate learning by exploiting second-order information about the error surface. Although these second-order methods have not been as successful as natural gradient learning, their results demonstrate that second-order information about the error surface is useful in the learning process. In this paper, we combine these two approaches to obtain a more efficient learning algorithm. At each learning step, we compute a search direction by means of the natural gradient; when updating the parameters along this direction, we use second-order information about the error surface to determine an efficient learning rate. Through a simple experiment on a real-world problem, we confirmed that the proposed algorithm converges faster than the pure natural gradient learning algorithm.
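One way to read the combined update is: take the natural-gradient direction d = -F⁻¹∇E(θ), where F is the Fisher information matrix, then set the learning rate to the minimizer of the local quadratic model of the error along d, which uses the Hessian of the error surface. The sketch below is an illustrative interpretation under that reading, not the paper's exact procedure; the callables grad_fn, fisher_fn, and hessian_fn, the damping term, and the fallback rate are hypothetical names introduced here for the example.

```python
import numpy as np

def natural_gradient_step(theta, grad_fn, fisher_fn, hessian_fn, damping=1e-4):
    """One sketched update: natural-gradient search direction combined with
    a learning rate chosen from second-order (curvature) information.

    Assumptions (not from the paper): the caller supplies callables for the
    error gradient, Fisher information matrix, and error Hessian.
    """
    g = grad_fn(theta)                    # ordinary gradient of the error
    F = fisher_fn(theta)                  # Fisher information matrix
    n = theta.size
    # Natural-gradient search direction d = -F^{-1} g (damped for stability).
    d = -np.linalg.solve(F + damping * np.eye(n), g)
    # Local quadratic model along d: E(theta + eta*d) ~ E + eta*(g.d)
    # + 0.5*eta^2*(d^T H d); its minimizer gives the learning rate.
    H = hessian_fn(theta)                 # Hessian of the error surface
    curvature = d @ H @ d
    eta = -(g @ d) / curvature if curvature > 0 else 1e-2  # fallback rate
    return theta + eta * d
```

Since d = -F⁻¹g makes g·d negative, the resulting eta is positive whenever the curvature along d is positive, so the step descends the quadratic model of the error.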