Spiking neural networks (SNNs) capture some of the efficiency of biological brains for inference and learning via the dynamic, online, and event-driven processing of binary time series. Most existing learning algorithms for SNNs are based on deterministic neuronal models, such as leaky integrate-and-fire, and rely on heuristic approximations of backpropagation through time that enforce constraints such as locality. In contrast, probabilistic SNN models can be trained directly via principled online, local update rules that have proven to be particularly effective for resource-constrained systems. This article investigates another advantage of probabilistic SNNs, namely, their capacity to generate independent outputs when queried over the same input. It is shown that the multiple generated output samples can be used during inference to robustify decisions and to quantify uncertainty, a feature that deterministic SNN models cannot provide. Furthermore, they can be leveraged for training in order to obtain more accurate statistical estimates of the log-loss training criterion and its gradient. Specifically, this article introduces an online learning rule based on generalized expectation-maximization (GEM) that follows a three-factor form with global learning signals and is referred to as GEM-SNN. Experimental results on structured output memorization and classification on a standard neuromorphic dataset demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of samples used for inference and training.
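The multi-sample inference idea described above can be illustrated with a minimal sketch: a probabilistic readout layer with Bernoulli (sigmoid-firing) spiking neurons is queried several times on the same input, and the independent output samples are aggregated into a decision plus an entropy-based uncertainty score. All names, shapes, and the Bernoulli-GLM neuron model here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sample_output(x, w, b, rng):
    # Hypothetical probabilistic spiking readout: Bernoulli spikes with
    # sigmoid firing probability (a common GLM-style stochastic neuron).
    # x: (T, D) binary input spike train; w: (D, C) weights; b: (C,) biases.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))    # (T, C) firing probabilities
    spikes = rng.random(p.shape) < p          # one stochastic output sample
    return spikes.sum(axis=0)                 # spike counts per output neuron

def predict_with_uncertainty(x, w, b, num_samples, rng):
    # Query the same input several times; each forward pass draws an
    # independent output sample from the probabilistic model.
    counts = np.array([sample_output(x, w, b, rng)
                       for _ in range(num_samples)])
    votes = counts.argmax(axis=1)             # per-sample class decision
    probs = np.bincount(votes, minlength=w.shape[1]) / num_samples
    decision = int(probs.argmax())            # majority-vote decision
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # ensemble uncertainty
    return decision, probs, entropy
```

Increasing `num_samples` makes the vote distribution `probs` a better Monte Carlo estimate of the model's output distribution, which is the same mechanism the abstract exploits both to robustify decisions and to tighten estimates of the log-loss and its gradient during training.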