Abstract

We introduce a supervised learning method for feed-forward networks that solves the credit assignment problem for error in concert with solving the error reduction problem normally associated with methods such as backpropagation. The method reverberates between forward and reverse activations of the network. Forward activation on an exemplar computes the output of each node in the network from the connection weights, as usual. Reverse activation, using the output error as input, computes a local error at each node from reverse weights, or responsibilities, on the reverse connections. Reverse-reverse activation (the same as forward activation but with linear output functions), using the reverse output error as input, computes a local reverse error at each node. Once local error and local reverse error have been assigned to each node, the weights and responsibilities are modified by the standard delta rule, driven by the local error and the local reverse error, respectively. The method relies on convergence toward an optimal set of responsibilities for distributing reverse error in concert with convergence toward an optimal set of weights, and so avoids the calculation of the nonlinear terms that appear in the usual error backpropagation method. The method is therefore free of derivative evaluations, and by allowing credit assignment to be optimized simultaneously with error reduction, it promotes clustering of responsibility among the nodes.

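The abstract describes the training loop only at a high level, so the following sketch is an illustration rather than the authors' algorithm: it assumes a single hidden layer, sigmoid forward units, NumPy, and particular learning rates, and it adopts one possible reading of the reverse-reverse step, namely that the reverse signal is re-propagated forward through the weights with linear node functions and its mismatch with the output error is taken as the local reverse error. What the sketch does reflect is the overall shape of the method: error is routed backward through separately learned responsibilities rather than through derivatives of the forward nonlinearities, and both the weights and the responsibilities are adjusted with outer-product (delta-rule) updates.

# Minimal sketch of the reverberating forward/reverse training loop.
# Layer sizes, learning rates, and the definition of the reverse output
# error below are assumptions made for concreteness, not the paper's rule.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 4, 8, 2                  # assumed layer sizes
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))      # forward weights: input -> hidden
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))     # forward weights: hidden -> output
R  = rng.normal(0.0, 0.1, (n_hid, n_out))     # responsibilities: output -> hidden
lr_w, lr_r = 0.10, 0.02                       # assumed learning rates

def train_step(x, t):
    """One reverberation: forward, reverse, reverse-reverse, then delta-rule updates."""
    global W1, W2, R
    # Forward activation: each node's output from the forward weights.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    e_out = t - y                             # output error

    # Reverse activation: responsibilities distribute the output error,
    # giving a local error at each hidden node (no derivatives involved).
    e_hid = R @ e_out

    # Reverse-reverse activation: push the reverse signal forward again
    # with linear node functions; its mismatch with the output error is
    # used here as the local reverse error (an assumed reading).
    e_out_lin = W2 @ e_hid
    r_out = e_out - e_out_lin                 # local reverse error at output nodes
    r_hid = R @ r_out                         # local reverse error at hidden nodes

    # Delta-rule updates: weights from local error, responsibilities from
    # local reverse error, so credit assignment and error reduction are
    # optimized together.
    W2 += lr_w * np.outer(e_out, h)
    W1 += lr_w * np.outer(e_hid, x)
    R  += lr_r * np.outer(r_hid, e_out)
    return float(np.sum(e_out ** 2))

# Toy usage: repeatedly present one input/target pair and watch the
# squared output error shrink.
x = rng.normal(size=n_in)
t = np.array([0.9, 0.1])
for step in range(300):
    loss = train_step(x, t)
print(f"squared output error after training: {loss:.4f}")

In this toy form the responsibilities play the role that the transposed weights and derivative factors play in backpropagation; because they are adapted by the same kind of local outer-product update as the weights, nothing in the loop requires evaluating a derivative.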