Model-agnostic neural mean field with a data-driven transfer function

The brain is one of the most complex systems known to science, and modeling its behavior and function is both fascinating and extremely difficult. Empirical data are increasingly available from ex vivo human brain organoids and surgical samples, as well as from in vivo animal models, so the problem of modeling the behavior of large-scale neuronal systems is more relevant than ever. The statistical-physics concept of a mean-field model offers a tractable way to bridge the gap between single-neuron and population-level descriptions of neuronal activity, by modeling the behavior of a single representative neuron and extending this to the population. However, existing neural mean-field methods typically either take the limit of small interaction sizes or are applicable only to the specific neuron models for which they were derived. This paper derives a mean-field model by fitting a transfer function called Refractory SoftPlus, which is simple yet applicable to a broad variety of neuron types. The transfer function is fitted numerically to simulated spike-time data and is entirely agnostic to the underlying neuronal dynamics. The resulting mean-field model predicts the response of a network of randomly connected neurons to a time-varying external stimulus with a high degree of accuracy. Furthermore, it enables an accurate approximate bifurcation analysis as a function of the level of recurrent input. Because this model assumes neither large presynaptic rates nor small postsynaptic potential sizes, mean-field models can be developed even for populations with large interaction terms.
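To make the idea concrete, a transfer function of this general flavor can be sketched as a softplus drive saturated by an absolute refractory period. The functional form, parameter names (`gain`, `threshold`, `t_ref`), and defaults below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def refractory_softplus_rate(mu, gain=1.0, threshold=0.0, t_ref=0.002):
    """Illustrative 'refractory softplus' transfer function.

    A softplus of the mean input drive gives an unsaturated rate (Hz),
    which is then capped by an absolute refractory period t_ref (s),
    so the output can never exceed 1 / t_ref. All parameters here are
    hypothetical placeholders, not values from the paper.
    """
    drive = softplus(gain * (mu - threshold))   # unsaturated rate
    return drive / (1.0 + t_ref * drive)        # refractory saturation

# Rates grow smoothly with input but saturate below 1 / t_ref = 500 Hz.
rates = refractory_softplus_rate(np.linspace(-5.0, 50.0, 6))
```

In a mean-field setting, such a function would be evaluated self-consistently: the population rate feeds back into the mean input `mu` through the recurrent coupling.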

Open Access
Tissue-like interfacing of planar electrochemical organic neuromorphic devices

Electrochemical organic neuromorphic devices (ENODes) are rapidly developing as platforms for computing, automation, and biointerfacing. Reproducing short- and long-term synaptic plasticity is a key requirement for functional neuromorphic interfaces that exhibit spiking activity and learning capabilities, and it could enable ENODes to couple with biological systems such as living neuronal cells and, ultimately, the brain. Before coupling ENODes with the brain, it is worth investigating their neuromorphic behavior when they interface with electrolytes whose mechanical properties resemble those of brain tissue, as this can affect the modulation of ion and neurotransmitter diffusion. Here, we present ENODes based on different PEDOT:PSS formulations with various geometries, interfacing with gel electrolytes loaded with a neurotransmitter to mimic brain-chip interfacing. Short-term plasticity and neurotransmitter-mediated long-term plasticity were characterized in contact with diverse gel electrolytes. We found that both the composition of the electrolyte and the PEDOT:PSS formulation used as gate and channel material play a crucial role in the diffusion and trapping of the cations that ultimately modulate the conductance of the transistor channels. Paired-pulse facilitation was achieved in both devices, while long-term plasticity was achieved with a tissue-like soft electrolyte, such as an agarose gel electrolyte, on spin-coated ENODes. Our work on ENODe-gel coupling could pave the way for effective brain interfacing for computing and neuroelectronic applications.
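The paired-pulse facilitation (PPF) the abstract mentions is commonly quantified as the ratio of the second response to the first as a function of the inter-pulse interval. A minimal phenomenological sketch, assuming a single exponentially decaying facilitation variable (the parameters `tau_fac`, `u0`, `du` are illustrative, not device measurements from the paper):

```python
import math

def ppf_ratio(dt, tau_fac=0.1, u0=0.2, du=0.3):
    """Toy paired-pulse facilitation ratio A2 / A1.

    The first pulse evokes a baseline response u0; it also leaves a
    transient increment du in efficacy that decays with time constant
    tau_fac (s), boosting the second response at interval dt (s).
    All parameters are hypothetical placeholders.
    """
    a1 = u0                                   # first-pulse response
    a2 = u0 + du * math.exp(-dt / tau_fac)    # facilitated second pulse
    return a2 / a1

# Facilitation is strongest at short intervals and decays toward 1.
short, long = ppf_ratio(0.01), ppf_ratio(0.5)
```

A fit of such a curve to measured PPF ratios would give the effective facilitation time constant of a given ENODe/electrolyte pairing.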

Open Access
Kernel heterogeneity improves sparseness of natural images representations

Both biological and artificial neural networks inherently balance their performance with their operational cost, which characterizes their computational abilities. Typically, an efficient neuromorphic neural network is one that learns representations reducing the redundancy and dimensionality of its input. For instance, in sparse coding (SC), sparse representations derived from natural images are heterogeneous, both in their sampling of input features and in the variance of those features. Here, we focused on this notion and sought correlations between the structure of natural images, particularly oriented features, and their corresponding sparse codes. We show that representing input features scattered across multiple levels of variance substantially improves the sparseness and resilience of sparse codes, at the cost of reconstruction performance. This echoes the structure of the model's input, allowing it to account for the heterogeneously aleatoric structure of natural images. We demonstrate that learning kernels from natural images produces heterogeneity by balancing between approximate and dense representations, which improves all reconstruction metrics. Using a parametrized control of kernel heterogeneity in a convolutional SC algorithm, we show that heterogeneity emphasizes sparseness, while homogeneity improves representation granularity. In a broader context, this encoding strategy can serve as input to deep convolutional neural networks. We show that such variance-encoded sparse image datasets enhance computational efficiency, emphasizing the benefits of kernel heterogeneity in leveraging naturalistic, variable input structures and its possible applications to improving the throughput of neuromorphic hardware.
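The trade-off between sparseness and reconstruction error that this abstract discusses is the core of any sparse-coding objective. As a toy illustration (a plain dense ISTA solver, not the paper's convolutional algorithm), one can solve 0.5·||x − Da||² + λ·||a||₁, where the threshold λ directly trades reconstruction fidelity for sparseness:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrink toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Minimal ISTA sketch for 0.5 * ||x - D a||^2 + lam * ||a||_1.

    D: (n_features, n_atoms) dictionary; x: (n_features,) signal.
    A toy, non-convolutional stand-in for sparse coding in general,
    not the paper's method.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

Raising `lam` zeroes out more coefficients (sparser code, coarser reconstruction); the paper's heterogeneous kernels shift where on that trade-off curve the learned codes sit.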

Open Access
Efficient sparse spiking auto-encoder for reconstruction, denoising and classification

Auto-encoders are capable of performing input reconstruction, denoising, and classification through an encoder-decoder structure. Spiking auto-encoders (SAEs) can utilize asynchronous sparse spikes to improve power efficiency and processing latency on neuromorphic hardware. In this work, we propose an efficient SAE trained using only Spike-Timing-Dependent Plasticity (STDP) learning. Our auto-encoder uses the Time-To-First-Spike (TTFS) encoding scheme and updates all synaptic weights only once per input, promoting both training and inference efficiency through this extreme sparsity. We showcase robust reconstruction performance on the Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets with 1-3 orders of magnitude fewer spikes than state-of-the-art SAEs. Moreover, we achieve robust noise-reduction results on the MNIST dataset. When the same noisy inputs are used for classification, accuracy degradation is reduced by 30%-80% compared to prior works. The model also exhibits classification accuracies comparable to previous STDP-based classifiers, while remaining competitive with backpropagation-based spiking classifiers that require global learning through gradients and significantly more spikes for encoding and classification of MNIST/Fashion-MNIST inputs. The presented results demonstrate a promising pathway toward building efficient sparse spiking auto-encoders with local learning, making them highly suited for hardware integration.
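The TTFS scheme named here maps each input intensity to at most one spike: stronger inputs fire earlier, zero inputs never fire. A minimal sketch of that encoding (the linear intensity-to-latency mapping and the `t_max` parameter are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def ttfs_encode(pixels, t_max=1.0):
    """Time-To-First-Spike encoding sketch: brighter pixels fire earlier.

    pixels: array of intensities in [0, 1]. Returns spike times in
    [0, t_max); pixels at zero intensity never fire (time = inf).
    The linear mapping is an illustrative choice.
    """
    pixels = np.asarray(pixels, dtype=float)
    times = np.full(pixels.shape, np.inf)     # inf = no spike emitted
    on = pixels > 0
    times[on] = t_max * (1.0 - pixels[on])    # high intensity -> early spike
    return times

# One spike at most per pixel: the source of the extreme sparsity.
spike_times = ttfs_encode([1.0, 0.5, 0.0])
```

Because each active input contributes exactly one spike, a single pass of spike arrivals suffices for the once-per-input STDP weight update the abstract describes.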

Open Access
ETLP: event-based three-factor local plasticity for online learning with neuromorphic hardware

Neuromorphic perception with event-based sensors, asynchronous hardware, and spiking neurons shows promise for real-time, energy-efficient inference in embedded systems. Brain-inspired computing aims to enable adaptation to changes at the edge through online learning. However, the parallel and distributed architectures of neuromorphic hardware, based on co-localized compute and memory, impose locality constraints on the on-chip learning rules. We propose the event-based three-factor local plasticity (ETLP) rule, which uses the pre-synaptic spike trace, the post-synaptic membrane voltage, and a third factor in the form of projected labels with no error calculation, which also serve as update triggers. ETLP is applied to visual and auditory event-based pattern recognition using feedforward and recurrent spiking neural networks. Compared to back-propagation through time, eProp, and DECOLLE, ETLP achieves competitive accuracy with lower computational complexity. We also show that, when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn spatio-temporal patterns with a rich temporal structure. Finally, we provide a proof-of-concept hardware implementation of ETLP on an FPGA to highlight the simplicity of its computational primitives and how they can be mapped onto neuromorphic hardware for online learning with real-time interaction and low energy consumption.
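The general shape of a three-factor rule of this kind is a product of a pre-synaptic trace, a local function of the post-synaptic membrane voltage, and a label-derived third factor, with no gradient propagated between layers. The sketch below is in the spirit of such rules only; the surrogate function, parameter names, and shapes are illustrative assumptions, not the published ETLP equations.

```python
import numpy as np

def three_factor_update(pre_trace, post_v, label_factor, lr=0.01, v_th=1.0):
    """Sketch of a local three-factor weight update.

    pre_trace:    (n_pre,) pre-synaptic eligibility traces.
    post_v:       (n_post,) post-synaptic membrane voltages.
    label_factor: (n_post,) third factor projected from labels.
    Returns a (n_post, n_pre) weight update. Every quantity is local
    to the synapse or its two neurons; no error is back-propagated.
    The voltage surrogate below is a hypothetical choice.
    """
    surrogate = 1.0 / (1.0 + np.abs(post_v - v_th))   # peaks at threshold
    return lr * np.outer(label_factor * surrogate, pre_trace)

# Example: one post-synaptic neuron at threshold, two pre-synaptic inputs.
dw = three_factor_update(np.array([1.0, 0.0]), np.array([1.0]), np.array([1.0]))
```

Because each factor is available at the synapse when an update event fires, such a rule maps naturally onto co-localized compute-and-memory neuromorphic hardware.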

Open Access