Abstract

This paper proposes an efficient learning method for layered neural networks based on the selection of training data and on the input characteristics of an output-layer unit. Compared with more recent neural network models such as pulse neural networks and quantum neuro-computation, the multilayer neural network remains widely used because of its simple structure. When the learning objects are complicated, however, problems such as unsuccessful learning or excessively long learning times remain unsolved. The aims of this paper are to suggest solutions to these problems and to reduce the total learning time, that is, the total computational time required to learn given objects, including adjusting parameter values and restarting the learning from the beginning. Focusing on the input data during the learning stage, we carried out an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output-layer unit oscillate during learning on complicated problems. Based on these oscillatory characteristics, the method determines whether learning proceeds to the next stage or restarts from the beginning. Computational experiments suggest that the proposed method achieves higher learning performance and requires less learning time than the conventional method.
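The abstract does not give implementation details, so the following is only a minimal sketch of the general idea: a one-hidden-layer network is trained in stages with plain backpropagation, the net input to the output unit is recorded each epoch, and an oscillation measure over that trace decides whether to advance to the next stage or restart with fresh weights. The oscillation measure (sign changes of epoch-to-epoch differences), the threshold, and all names such as oscillation_score and osc_threshold are illustrative assumptions, not the authors' method.

import numpy as np

# Sketch (not the paper's implementation): staged training of a
# one-hidden-layer network; oscillation of the output unit's net input
# decides whether to advance to the next stage or restart learning.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_weights(n_in, n_hidden):
    # Small random weights; re-drawn whenever learning restarts.
    return (rng.normal(0, 0.5, (n_in, n_hidden)),
            rng.normal(0, 0.5, (n_hidden, 1)))

def oscillation_score(trace):
    # Placeholder oscillation measure: fraction of sign changes in the
    # epoch-to-epoch differences of the output unit's net input.
    diffs = np.diff(trace)
    signs = np.sign(diffs)
    changes = np.sum(signs[1:] * signs[:-1] < 0)
    return changes / max(len(diffs) - 1, 1)

def train_stage(X, y, W1, W2, epochs=200, lr=0.5):
    trace = []                           # output unit's mean net input per epoch
    for _ in range(epochs):
        h = sigmoid(X @ W1)              # hidden activations
        net_out = h @ W2                 # net input to the output unit
        out = sigmoid(net_out)
        err = out - y
        # Standard backpropagation updates.
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ grad_out / len(X)
        W1 -= lr * X.T @ grad_h / len(X)
        trace.append(float(net_out.mean()))
    return W1, W2, np.array(trace), float(np.mean(err ** 2))

# XOR as a stand-in for a "complicated" learning object.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

osc_threshold = 0.6                      # assumed cut-off, not from the paper
W1, W2 = init_weights(2, 4)
for stage in range(1, 6):
    W1, W2, trace, mse = train_stage(X, y, W1, W2)
    score = oscillation_score(trace)
    print(f"stage {stage}: mse={mse:.4f} oscillation={score:.2f}")
    if score > osc_threshold:
        # Heavy oscillation: restart learning from the beginning.
        W1, W2 = init_weights(2, 4)
    # Otherwise proceed to the next stage with the current weights.

In this sketch the restart criterion is deliberately simple; the paper's actual criterion is based on its own characterization of the oscillatory input to the output-layer unit and may differ substantially.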
