Abstract

In neuroscience, it is widely believed that learning and memory are primarily based on synaptic plasticity, a neural mechanism that modifies the strength of the connections between neurons. As a counterpart in machine learning, the modification of connection strengths (weights) endows artificial neural networks with a powerful capability to learn to solve various problems. Independent of modifications to synaptic strength, recent experimental results have revealed that a single neuron can also change its intrinsic excitability to fit its synaptic input, a mechanism referred to in the literature as neuronal intrinsic plasticity (IP). Computational learning rules for IP have been developed based on the hypothesis of information maximization subject to a stable response level. Since the discovery of this plasticity mechanism, a series of studies has examined the role IP plays in biological neural systems and how it benefits the learning performance of artificial neural networks. This review surveys research on the synergy between IP and synaptic plasticity, both in computational models of biological neural systems and in applications of artificial neural networks; this combination in artificial learning systems is defined as synergistic learning.
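As a concrete illustration of such a rule, the following is a minimal Python sketch in the spirit of a Triesch-style IP rule for a sigmoid neuron: the neuron's gain and bias are adapted so that its output distribution approaches an exponential distribution with a fixed target mean, i.e., output entropy is maximized at a stable response level. The learning rate eta, target mean mu, and input statistics below are illustrative assumptions, not values taken from the review.

import numpy as np

def ip_update(a, b, x, eta=0.01, mu=0.1):
    """One intrinsic-plasticity step on the neuron's excitability (a, b).

    The neuron computes y = 1 / (1 + exp(-(a*x + b))); the updates pull the
    output distribution p(y) toward an exponential with mean mu, changing
    only the neuron's intrinsic parameters, never the synaptic weights.
    """
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y ** 2) / mu)  # bias update
    da = eta / a + x * db                                    # gain update
    return a + da, b + db, y

# Usage: drive the neuron with random input standing in for net synaptic
# drive; its mean output should approach mu without any weight changes.
a, b = 1.0, 0.0
rng = np.random.default_rng(0)
outputs = []
for _ in range(20000):
    x = rng.normal(1.0, 1.0)
    a, b, y = ip_update(a, b, x)
    outputs.append(y)
print("mean output:", np.mean(outputs[-5000:]))  # close to the target mu

Such a rule can run alongside a synaptic rule (e.g., Hebbian learning) on the same neuron, which is the kind of combination the review terms synergistic learning.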
