Abstract

In this literature review, we examine several deep learning algorithms in the context of biological plausibility and, in turn, argue that a backprop-like algorithm is the most likely candidate for how learning operates in the brain. Although there are numerous difficulties in how the backpropagation algorithm might be implemented in neural circuitry, we note that slight variations of the algorithm have been found to circumvent biological constraints and that seemingly unrelated algorithms can often be theoretically related to it. In particular, we examine the literature behind feedback alignment, target propagation, and equilibrium propagation, after giving some general background on learning in biology, AI, and their intersection. Ultimately, we acknowledge that there is no true consensus as to which learning algorithm the brain actually uses, but suspect that the answer is backprop-like in nature.
