Abstract

In a recent study, a modified form of the error-of-performance function for back-propagation (BP) learning in feedforward neural networks (NN) was shown to improve the learning speed. In $\partial E/\partial\omega$, the derivative term $f_j' = (1 - f_j)\,f_j$ of the output-layer activation function $f_j$ was removed, so the updating of the coupling strengths via the BP formula was less suppressed. Nevertheless, the linear dependence on $\Delta_j = t_j - z_j$ remained, where $t_j$ and $z_j$ are the target and actual values of output unit $j$. In this letter, we show how this unwanted $\Delta_j$ dependence can also be removed by defining a new error function. A linear combination of the recently studied "cross-entropy cost function," reminiscent of the sum over the quadratic errors, and our new error function, reminiscent of the sum over the absolute values of the errors, significantly improves BP learning.
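
As a rough illustration of how the choice of cost function changes the output-layer error signal, the sketch below compares the three cases for sigmoid output units. It is a reconstruction under stated assumptions, not the authors' implementation: the function name output_delta and the mixing weights alpha and beta are illustrative, and the sign-based form of the new error function's signal is inferred only from its description as an absolute-error-like cost.

# Minimal sketch (assumptions noted above) of the output-layer error
# signal delta_j for different cost functions, with sigmoid outputs
# z_j = f(h_j) and Delta_j = t_j - z_j.
import numpy as np

def output_delta(t, z, alpha=1.0, beta=1.0):
    """Output-layer error signals for three cost choices.

    t, z : arrays of target and actual output values (sigmoid units in (0, 1)).
    alpha, beta : hypothetical mixing weights for the linear combination;
                  the actual coefficients are not given in the abstract.
    """
    delta = t - z                   # Delta_j = t_j - z_j
    fprime = z * (1.0 - z)          # f_j' = (1 - f_j) f_j, sigmoid derivative

    quadratic = delta * fprime      # quadratic cost: update suppressed by f_j'
    cross_entropy = delta           # cross-entropy cost: f_j' removed, Delta_j remains
    absolute_like = np.sign(delta)  # assumed absolute-error-like cost: only the sign of Delta_j

    combined = alpha * cross_entropy + beta * absolute_like
    return quadratic, cross_entropy, absolute_like, combined

# Example: a saturated, wrong output (z near 1, target 0) gives a vanishing
# quadratic signal but a sizeable signal under the combined cost.
print(output_delta(np.array([0.0]), np.array([0.999])))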
