Abstract

The well-known backpropagation learning algorithm is probably the most popular learning algorithm in artificial neural networks, and it has been widely used in many applications of deep learning. The backpropagation algorithm requires a separate feedback network to back-propagate errors, and this feedback network must have the same topology and connection strengths (weights) as the feed-forward network. In this article, we propose a new learning algorithm that is mathematically equivalent to the backpropagation algorithm but does not require a feedback network. Eliminating the feedback network makes the new algorithm much simpler to implement. It also makes it significantly more biologically plausible that biological neural networks could learn with the new algorithm, by means of retrograde regulatory mechanisms that may exist in neurons. The new algorithm further eliminates the need for two-phase adaptation (a feed-forward phase and a feedback phase), so neurons can adapt asynchronously and concurrently, in a way analogous to biological neurons.
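To make the "feedback network" concrete, the following is a minimal sketch of standard two-layer backpropagation (not the paper's proposed algorithm). It shows that the backward pass reuses the transposed feed-forward weights, so the feedback path must mirror the feed-forward network's topology and weights; this is exactly the requirement the article's new algorithm removes. All variable names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector
t = rng.standard_normal(2)        # target vector
W1 = rng.standard_normal((3, 4))  # first-layer (feed-forward) weights
W2 = rng.standard_normal((2, 3))  # second-layer (feed-forward) weights

# Feed-forward phase
h = np.tanh(W1 @ x)               # hidden activations
y = W2 @ h                        # linear output

# Feedback phase: the error is propagated through W2.T, i.e. through a
# feedback network with the same weights as the feed-forward network.
delta_out = y - t                             # output error (squared loss)
delta_hid = (W2.T @ delta_out) * (1 - h**2)   # back-propagated hidden error

# Gradient-descent weight updates
lr = 0.1
W2 -= lr * np.outer(delta_out, h)
W1 -= lr * np.outer(delta_hid, x)
```

Note that the two phases must run in strict sequence; the article's claim is that an equivalent update can be computed without the separate backward pass through `W2.T`.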
