Abstract

Nature has always inspired the human spirit, and scientists have frequently developed new methods based on observations from nature. Recent advances in imaging and sensing technology allow fascinating insights into biological neural processes. With the objective of finding new strategies to enhance the learning capabilities of neural networks, we focus on homeostatic plasticity, a phenomenon closely related to learning tasks and neural stability in biological neural networks. Among the theories developed to describe homeostatic plasticity, synaptic scaling has been found to be the most mature and applicable. We systematically discuss previous studies of synaptic scaling theory and how it could be applied to artificial neural networks. To this end, we use information theory to analytically evaluate how mutual information is affected by synaptic scaling. Based on these analytic findings, we propose two variants in which synaptic scaling can be applied during the training of simple and complex feedforward and recurrent neural networks. We compare our approach with state-of-the-art regularization techniques on standard benchmarks. In our experiments, across a wide range of feedforward and recurrent network topologies and data sets, the proposed method yields the lowest error in both regression and classification tasks compared to previous regularization approaches.
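
The two training variants themselves are not spelled out in this excerpt. As a rough illustration of the general idea behind synaptic scaling, the sketch below multiplicatively rescales each unit's incoming weights toward a fixed target norm after every optimizer step; the function name, the L1-norm target, and the scaling rate are illustrative assumptions, not the authors' published procedure.

```python
import torch

@torch.no_grad()
def synaptic_scale_(layer: torch.nn.Linear, target: float = 1.0, rate: float = 0.1) -> None:
    """Illustrative (not the paper's) multiplicative scaling of incoming weights.

    Each output unit's incoming weights are nudged toward a fixed L1 norm,
    mimicking the homeostatic idea that total synaptic drive stays roughly constant.
    """
    w = layer.weight                                   # shape: (out_features, in_features)
    norms = w.abs().sum(dim=1, keepdim=True).clamp_min(1e-8)
    w.mul_((target / norms) ** rate)                   # partial step toward the target norm

# Hypothetical usage inside a training loop, after optimizer.step():
#     for module in model.modules():
#         if isinstance(module, torch.nn.Linear):
#             synaptic_scale_(module)
```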

Highlights

  • We evaluated the concept of synaptic scaling and its effect on the training of artificial neural networks (ANNs) by assessing: 1) the flow of mutual information throughout a network’s layers; 2) the distribution of trained weights; and 3) the accuracy of trained classifiers in an experimental setup with different topologies and data sets

  • We found that layers in networks trained with synaptic scaling retain less mutual information about the input, and we conclude that these networks can learn more generalizing feature representations, resulting in higher classification accuracies (a rough mutual-information estimate is sketched after this list)
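
The paper's information-theoretic analysis is not reproduced in this excerpt. As a minimal sketch of how one might empirically compare the mutual information between an input feature and a hidden activation across trained networks, the following binning-based estimator could be used; the binning scheme and helper name are assumptions for illustration only, not the authors' method.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def binned_mutual_information(x: np.ndarray, h: np.ndarray, bins: int = 30) -> float:
    """Crude MI estimate (in nats) between a 1-D input feature and a 1-D hidden activation."""
    x_ids = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    h_ids = np.digitize(h, np.histogram_bin_edges(h, bins=bins))
    return mutual_info_score(x_ids, h_ids)

# Lower values for a layer trained with synaptic scaling would be consistent with the
# highlight above: the layer retains less information about the raw input.
```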

Introduction

In 1943, McCulloch and Pitts [1] gained insights into brain function by formalizing the neuron concept, describing the activity of nerve cells (McCulloch-Pitts cell). Machine learning concepts have frequently been inspired by nature. Examples are Hebb's learning rule [2], which has inspired learning algorithms [3] since 1949, and the receptive fields of the visual cortex, which inspired the convolution concept [4], [5]. Convolutions improve the accuracy of neural networks and reduce their computational cost through a significant reduction of a network's parameters, making neural networks suitable for a wide variety of hardware, including mobile devices.

