Abstract

We derive learning rules for finding the connections between units in stochastic dynamical networks from the recorded history of a "visible" subset of the units. We consider two models. In both of them, the visible units are binary and stochastic. In one model the "hidden" units are continuous-valued, with sigmoidal activation functions, and in the other they are binary and stochastic like the visible ones. We derive exact learning rules for both cases. For the stochastic case, performing the exact calculation requires, in general, repeated summations over a number of configurations that grows exponentially with the size of the system and the data length, which is not feasible for large systems. We derive a mean field theory, based on a factorized ansatz for the distribution of hidden-unit states, which offers an attractive alternative for large systems. We present the results of some numerical calculations that illustrate key features of the two models and, for the stochastic case, the exact and approximate calculations.
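The factorized ansatz mentioned above replaces the joint distribution of hidden-unit states by a product of single-unit marginals. As a minimal illustration (not the paper's learning rule), under such an ansatz the magnetizations m_i(t) = ⟨s_i(t)⟩ of a synchronously updated binary network evolve by the naive mean-field iteration m_i(t+1) = tanh(θ_i + Σ_j J_ij m_j(t)). The function and variable names below are our own:

```python
import numpy as np

def mean_field_trajectory(J, theta, m0, T):
    """Naive mean-field dynamics for a synchronous binary (+/-1) network.

    Under a factorized ansatz for the unit states, each magnetization
    m_i(t) = <s_i(t)> evolves as m_i(t+1) = tanh(theta_i + sum_j J_ij m_j(t)).
    This sketches only the approximation idea, not the full learning rule.
    """
    m = np.asarray(m0, dtype=float)
    traj = [m.copy()]
    for _ in range(T):
        m = np.tanh(theta + J @ m)   # factorized update of the means
        traj.append(m.copy())
    return np.array(traj)

# Example: 4 units with random (generally asymmetric) couplings
rng = np.random.default_rng(0)
N = 4
J = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))
theta = np.zeros(N)
traj = mean_field_trajectory(J, theta, m0=0.1 * np.ones(N), T=50)
print(traj.shape)  # (51, 4)
```

Because tanh is bounded, every magnetization stays in [-1, 1], consistent with its interpretation as the mean of a ±1 variable.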

Highlights

  • Recent interest in network identification problems has been motivated by the advent of multi-electrode neural recordings and other large-scale biological data [1, 2, 3, 4]

  • Because of the symmetric coupling matrix, their dynamics satisfies detailed balance, so their equilibrium distributions are of Gibbs-Boltzmann form Z⁻¹ exp(−E), where E is a quadratic form

  • We have derived learning rules for two kinds of stochastic binary networks with hidden units. These networks differ from Boltzmann machines in that (1) the units in them are updated synchronously rather than asynchronously, and (2) the connection strengths are allowed to be asymmetric

Summary

Introduction

Recent interest in network identification problems has been motivated by the advent of multi-electrode neural recordings and other large-scale biological data [1, 2, 3, 4]. The problem has been studied in networks where the unit outputs are continuous sigmoidal functions of their inputs, for both continuous-time (asynchronous-update) and discrete-time (simultaneous-update) dynamics, extending the back-propagation algorithm used earlier for layered networks. Applying either of these kinds of models to multineuron spike data is problematic. In this paper we treat models in which the recorded neurons are stochastic and binary, and there is no symmetry requirement on the connections in the network. They obey a discrete-time kinetic Ising (Glauber) dynamics [6], and a value +1 represents an action potential. We first examine the deterministic case, taking the output of a hidden unit to be a sigmoidal function of its input. Though it is a big simplification of a real spiking-neuron network, this kind of model can be practical for analyzing neural data.
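The dynamics described above can be sketched in a few lines. In a synchronous (parallel-update) kinetic Ising model, every unit i receives the field h_i(t) = θ_i + Σ_j J_ij s_j(t) and takes the value +1 at the next step with probability exp(h_i)/(2 cosh h_i); the coupling matrix J need not be symmetric. The following simulation is our own minimal sketch, with hypothetical function and variable names:

```python
import numpy as np

def simulate_kinetic_ising(J, theta, T, rng=None):
    """Synchronous Glauber dynamics for N binary +/-1 units.

    At each step all units are updated in parallel: unit i takes value +1
    with probability exp(h_i) / (2 cosh h_i), where h_i = theta_i + (J s)_i.
    J may be asymmetric; +1 can be read as "the neuron fires".
    """
    if rng is None:
        rng = np.random.default_rng(0)
    N = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=N)          # random initial state
    history = np.empty((T, N))
    for t in range(T):
        h = theta + J @ s                        # total input to each unit
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h))  # = e^h / (2 cosh h)
        s = np.where(rng.random(N) < p_plus, 1.0, -1.0)
        history[t] = s
    return history

# Example: 5 units with asymmetric random couplings
rng = np.random.default_rng(1)
N = 5
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # not symmetrized
spikes = simulate_kinetic_ising(J, np.zeros(N), T=1000, rng=rng)
print(spikes.shape)  # (1000, 5)
```

In a hidden-unit setting, only some columns of such a history would be recorded; the learning rules derived in the paper infer J from that visible subset.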

Objective function and learning rules
Numerical results
Stochastic hidden units
Exact learning algorithm
Mean field theory for stochastic hidden units
Derivation of mean-field theory
Learning algorithm
Discussion
