Abstract

In this work, we present a local intrinsic rule that we developed, dubbed IP, inspired by the Infomax rule. Like Infomax, this rule works by controlling the gain and bias of a neuron to regulate its firing rate. We discuss the biological plausibility of the IP rule and compare it to batch normalisation. We demonstrate that the IP rule improves learning in deep networks and provides networks with considerable robustness to increases in synaptic learning rates. We also sample the error gradients during learning and show that the IP rule substantially increases the size of the gradients over the course of learning, suggesting that the IP rule solves the vanishing gradient problem. Supplementary analysis derives the equilibrium solutions to which the neuronal gain and bias converge under our IP rule. A further analysis demonstrates that, on a fixed input distribution, the IP rule yields neuronal information potential similar to that of Infomax. We also show that batch normalisation improves information potential, suggesting that this may be a cause of the efficacy of batch normalisation, an open problem at the time of this writing.
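
As a concrete illustration of a gain-and-bias intrinsic plasticity update, the sketch below implements a Triesch-style IP rule for a single sigmoid neuron, adapting a gain a and a bias b so that the neuron's output distribution approaches an exponential with target mean firing rate mu. This is a generic rule from the intrinsic plasticity literature, not necessarily the exact update derived in this paper; the parameter names (a, b, mu, eta) and default values are illustrative assumptions.

    import numpy as np

    def ip_step(x, a, b, mu=0.1, eta=0.001):
        """One Triesch-style intrinsic plasticity update for a sigmoid neuron.

        x    : weighted synaptic input to the neuron
        a, b : intrinsic gain and bias, the only quantities the rule adapts
        mu   : target mean firing rate of the desired exponential output distribution
        eta  : intrinsic-plasticity learning rate
        """
        y = 1.0 / (1.0 + np.exp(-(a * x + b)))  # sigmoid firing rate
        # Stochastic gradient step on the KL divergence between the output
        # distribution and an exponential distribution with mean mu.
        db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)
        da = eta / a + db * x
        return a + da, b + db, y

Applied online over a stream of inputs, this update drives the neuron's mean output towards mu while pushing its output distribution towards an exponential, the maximum-entropy distribution for a fixed mean rate, which is close in spirit to the Infomax objective mentioned above.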

Highlights

  • The study of how neural learning occurs in the biological brain has led to the development of artificial neural networks (ANNs) in computer science

  • Unlike Li and Li, who only studied the effects of IP on a very small network with one hidden layer, this paper demonstrates the computational benefits that IP confers upon deep neural networks

  • We studied the relationship between a local, intrinsic learning mechanism and a synaptic, error-based learning mechanism in ANNs



Introduction

The study of how neural learning occurs in the biological brain has led to the development of artificial neural networks (ANNs) in computer science. The resulting research has largely focused on the ability of networks to learn by altering the strength of their synapses. This learning mechanism takes a variety of forms, such as Hebbian learning [1] and its variants [2], as well as error-based learning such as backpropagation [3]. Intrinsic plasticity refers to the phenomenon of neurons regulating their firing rate in response to changes in the distribution of their stimuli [4]. This mechanism seems to have two primary benefits for neural networks.
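
To make the distinction between the two mechanisms concrete, the sketch below contrasts a plain Hebbian synaptic update, which changes connection weights, with a minimal homeostatic intrinsic update, which changes only the neuron's own bias to keep its firing rate near a target. Both forms and all parameter names (eta_syn, eta_ip, target_rate) are illustrative assumptions rather than the rules studied in this paper.

    def hebbian_step(w, x, y, eta_syn=0.01):
        """Synaptic plasticity: Hebb's rule strengthens each weight in
        proportion to coincident pre-synaptic activity x and
        post-synaptic activity y."""
        return w + eta_syn * y * x

    def homeostatic_step(b, y, target_rate=0.1, eta_ip=0.001):
        """Intrinsic plasticity in its simplest homeostatic form: raise the
        bias when the neuron fires below its target rate and lower it when
        it fires above, leaving all synaptic weights untouched."""
        return b + eta_ip * (target_rate - y)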
