Abstract
Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
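To make the similarity matching idea concrete, the offline objective minimizes the mismatch between input and output Gram matrices, min_Y ||XᵀX − YᵀY||²_F, and its optimum is obtained from the top eigenpairs of XᵀX. The sketch below (NumPy; all variable names and the test data are ours, chosen for illustration, not taken from the paper) computes this offline solution for rank-k inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 200                     # input dim, output dim, samples
Xin = rng.standard_normal((n, k)) @ rng.standard_normal((k, T))  # rank-k inputs

# Offline similarity matching: min_Y ||Xin^T Xin - Y^T Y||_F^2 over k x T matrices Y.
# The optimum keeps the top-k eigenpairs of the T x T input Gram matrix.
G = Xin.T @ Xin                          # input similarity (Gram) matrix
evals, evecs = np.linalg.eigh(G)         # eigenvalues in ascending order
top = np.argsort(evals)[::-1][:k]        # indices of the k largest eigenvalues
Y = np.sqrt(np.maximum(evals[top], 0.0))[:, None] * evecs[:, top].T  # k x T output

err = np.linalg.norm(G - Y.T @ Y)        # ~0 here, since the inputs have rank k
```

Because the synthetic inputs are exactly rank k, the output similarity matrix YᵀY reproduces the input Gram matrix up to floating-point error; for full-rank inputs the residual would equal the discarded eigenvalues.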
Highlights
The human brain generates complex behaviors via the dynamics of electrical activity in a network of ∼10^11 neurons, each making ∼10^4 synaptic connections
The variable substitution method we introduced in the previous section can be applied to other computational objectives in order to derive neural networks with local learning rules.
Through transparent variable substitutions, we demonstrated why biologically plausible neural networks can be derived from similarity matching objectives; mathematically formalized the adversarial relationship between Hebbian feedforward and anti-Hebbian lateral connections as a min-max optimization, lending itself to a game-theoretic interpretation; and formulated dimensionality reduction tasks as optimizations involving fractional powers of matrices.
Summary
The human brain generates complex behaviors via the dynamics of electrical activity in a network of ∼10^11 neurons, each making ∼10^4 synaptic connections. In most existing single-layer networks (Figure 1), Hebbian and anti-Hebbian learning rules were postulated rather than derived from a principled objective. Until recently, all derivations of single-layer networks from principled objectives led to biologically implausible non-local learning rules, where the weight of a synapse depends on the activities of neurons other than the two the synapse connects. Eliminating neural activity variables leads to a min-max objective in terms of feedforward and lateral synaptic weight matrices. This formalizes the long-held intuition about the adversarial relationship of Hebbian and anti-Hebbian learning rules.
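In the online setting, networks of this kind process one input at a time: lateral dynamics settle to a fixed point, then each synapse is updated using only the activities of the two neurons it connects. The sketch below is a minimal illustration of such a network, assuming the standard fixed-point dynamics y = (I + M)⁻¹Wx and moving-average Hebbian/anti-Hebbian updates; the step size, initialization, and data are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, T = 10, 3, 2000                    # input dim, output dim, stream length
X = 0.1 * rng.standard_normal((n, k)) @ rng.standard_normal((k, T))

W = 0.1 * rng.standard_normal((k, n))    # feedforward weights (Hebbian)
M = np.zeros((k, k))                     # lateral weights (anti-Hebbian)
eta = 0.01                               # illustrative step size

for t in range(T):
    x = X[:, t]
    # Neural dynamics fixed point of y = W x - M y:
    y = np.linalg.solve(np.eye(k) + M, W @ x)
    # Local updates: each weight sees only its pre- and post-synaptic activities.
    W += eta * (np.outer(y, x) - W)      # Hebbian: strengthen co-active pairs
    dM = eta * (np.outer(y, y) - M)      # anti-Hebbian: decorrelate outputs
    np.fill_diagonal(dM, 0.0)            # no self-connections
    M += dM
```

Note the locality: the update to W[i, j] depends only on y[i] and x[j], and the update to M[i, j] only on y[i] and y[j], which is what distinguishes these rules from the non-local gradients obtained from earlier objectives.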