Abstract

Generally, a desired training set is used to make a neural network learn nonlinear relations properly. The training set consists of multiple pairs of input and output vectors. Each input vector is given to the input layer for the forward calculation, and the corresponding output vector is compared with the vector yielded by the output layer. In the backward calculation, the weights between neurons are updated based on the errors using a back-propagation algorithm. The time required for the learning process of a neural network depends on the total number of weights and on the number of input-output pairs in the training set. In the proposed learning process, after the learning has progressed for some iterations, e.g., 1000, the input-output pairs that have had worse errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, which means that only the pairs with worse errors are used for updating the weights until the mean value of their errors decreases to a certain level. After the learning using the temporary set, the original set is applied again instead of the temporary set. By alternately applying the two types of sets for learning, the calculation load for convergence can be efficiently reduced. The effectiveness of the proposed method is demonstrated by applying it to an inverse kinematics problem of an industrial robot.
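The following is a minimal sketch, not the authors' code, of the alternating training-set idea described above: after a block of iterations on the full set, pairs with worse-than-average error are extracted into a temporary set, the network trains on that set for the next block, and then the original set is restored. The toy task, network size, learning rate, and the choice to switch back at the next block boundary (rather than at a specific error threshold) are all assumptions made for illustration.

```python
# Sketch of alternating between the full training set and a temporary set of
# worse-error pairs. All hyperparameters and the toy task are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-pi, pi] (stand-in for the original training set).
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

# One hidden layer with tanh activation, trained by plain back-propagation.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def train_step(x, y):
    """One forward/backward pass on the currently active set; returns its MSE."""
    global W1, b1, W2, b2
    h, out = forward(x)
    err = out - y                          # per-pair output error
    dW2 = h.T @ err / len(x)               # gradients of the mean squared error
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return (err**2).mean()

BLOCK = 1000                               # iterations before re-selecting, as in the abstract's example
active_X, active_Y = X, Y
for it in range(1, 20001):
    train_step(active_X, active_Y)
    if it % BLOCK == 0:
        # Per-pair squared errors on the ORIGINAL set decide the next active set.
        _, out = forward(X)
        per_pair = ((out - Y) ** 2).mean(axis=1)
        if active_X is X:
            # Extract pairs with worse-than-average error into a temporary set.
            mask = per_pair > per_pair.mean()
            active_X, active_Y = X[mask], Y[mask]
        else:
            # Return to the original set after training on the hard pairs.
            active_X, active_Y = X, Y
        print(f"iter {it}: full-set MSE {per_pair.mean():.5f}, active pairs {len(active_X)}")
```

Because the temporary set contains only a fraction of the original pairs, each iteration in that phase costs proportionally less, which is the source of the reduced calculation load claimed in the abstract.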
