Abstract

Using formal methods complemented by large-scale simulations, we investigate information-theoretic properties of spiking neurons trained with Hebbian and STDP learning rules. We show that weight space contains meta-stable states: points where the average weight change under the learning rule vanishes. These points can capture the random walker transiently; the dwell time in the vicinity of a meta-stable state is either quasi-infinite or very short, depending on the level of noise in the system. Moreover, important information-theoretic quantities, such as the amount of information the neuron transmits, are determined by the meta-stable state. While the Hebbian learning rule reliably leads to meta-stable states, the STDP rule tends to be unstable: for all but a restricted set of hyper-parameter choices, the weights are not captured by meta-stable states. Stochastic fluctuations thus play an important role in determining which meta-stable state the neuron settles into. To understand this, we model the neuron's trajectory through weight space as an inhomogeneous Markovian random walk whose transition probabilities are determined by the statistics of the input signal.
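The capture-and-escape behaviour described above can be illustrated with a toy one-dimensional drift-plus-noise walk. This is a minimal sketch, not the paper's actual Hebbian/STDP dynamics: the drift field, learning rate, and noise scale below are assumptions chosen so that the average weight change vanishes at meta-stable points and noise controls the dwell time.

```python
import numpy as np

def drift(w):
    # Toy "average weight change": zeros at w = -1, 0, +1.
    # w = +/-1 are attracting (meta-stable) under the average dynamics.
    return w - w**3

def dwell_time(sigma, eta=0.01, w0=1.0, max_steps=200_000, seed=0):
    """Steps spent near the meta-stable state at w0 = 1 before the noisy
    walker escapes its basin (crosses w = 0). sigma stands in for the
    stochastic fluctuations of the input signal."""
    rng = np.random.default_rng(seed)
    w = w0
    for t in range(max_steps):
        w += eta * drift(w) + sigma * np.sqrt(eta) * rng.standard_normal()
        if w < 0.0:
            return t
    return max_steps  # never escaped: "quasi-infinite" dwell time

low_noise  = np.mean([dwell_time(0.3, seed=s) for s in range(20)])
high_noise = np.mean([dwell_time(0.8, seed=s) for s in range(20)])
print(low_noise, high_noise)  # low-noise dwell times are far longer
```

Lowering `sigma` makes escapes exponentially rarer (Kramers-type scaling), reproducing the dichotomy between quasi-infinite and very short dwell times noted in the abstract.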
