Fractal memory structure in the spatiotemporal learning rule
The spatiotemporal learning rule (STLR) can reproduce synaptic plasticity in the hippocampus. Analyzing the synaptic weights of a network trained with the STLR is challenging, so our previous research focused only on the network's outputs. However, a detailed analysis of the STLR requires examining the synaptic weights themselves. To address this, we mapped the synaptic weights into a distance space and analyzed the characteristics of the STLR. The results indicate that the synaptic weights form a fractal-like structure in Euclidean distance space. Furthermore, three analytical approaches (multidimensional scaling, fractal-dimension estimation, and modeling with an iterated function system) demonstrate that the STLR builds this fractal structure in the synaptic weights through fractal coding. These findings contribute to clarifying the learning mechanisms of the hippocampus.
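The analysis pipeline named in the abstract (an iterated function system generating a point set whose fractal dimension is then estimated) can be sketched in miniature. This is not the paper's code: as a stand-in for the synaptic weight vectors, a Sierpinski point set is generated by an IFS via the chaos game, and a simple box-counting fit recovers its fractal dimension. All parameter values here are illustrative assumptions.

```python
import numpy as np

def chaos_game_sierpinski(n_points=20000, seed=0):
    # Iterated function system: three contractions, each pulling the
    # current point halfway toward one vertex of a triangle
    rng = np.random.default_rng(seed)
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    p = np.array([0.25, 0.25])
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        p = (p + vertices[rng.integers(3)]) / 2.0
        pts[i] = p
    return pts

def box_counting_dimension(points, scales=(4, 8, 16, 32, 64)):
    # Count occupied grid boxes at each scale, then fit the slope of
    # log(count) against log(scale): the box-counting dimension estimate
    counts = []
    for s in scales:
        boxes = np.floor(points * s).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

pts = chaos_game_sierpinski()
d = box_counting_dimension(pts)
print(round(d, 2))  # near log(3)/log(2) ~ 1.585 for the Sierpinski set
```

The same box-counting estimator can be applied to weight vectors embedded in a low-dimensional space (e.g., after multidimensional scaling), which is the spirit of the analysis the abstract describes.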
- Research Article
4
- 10.3389/fnsys.2021.624353
- Mar 29, 2021
- Frontiers in Systems Neuroscience
The spatiotemporal learning rule (STLR), proposed on the basis of hippocampal neurophysiological experiments, differs essentially from the Hebbian learning rule (HEBLR) in its self-organization mechanism: information from the external world is self-organized by firing output neurons (HEBLR) or by non-firing output neurons (STLR). Here, we describe the differences in self-organization between the two learning rules by simulating neural network models trained on relatively similar spatiotemporal context information. Comparing the weight distributions after training, the HEBLR shows a unimodal distribution near the training vector, whereas the STLR shows a multimodal distribution. We analyzed the shape of the weight distribution in response to temporal changes in contextual information and found that the HEBLR does not change the shape of the weight distribution for time-varying spatiotemporal contextual information, whereas the STLR is sensitive to slight differences in spatiotemporal contexts and produces a multimodal distribution. These results suggest a critical difference between the HEBLR and STLR in how synaptic weight distributions change dynamically during contextual learning. They also capture the characteristic pattern completion of the HEBLR and pattern discrimination of the STLR, which adequately explain the self-organization mechanism of contextual information learning.
- Book Chapter
1
- 10.1007/978-981-16-0317-4_13
- Jan 1, 2021
Google used 10 million natural images as input and performed self-organized learning with a huge neural network containing 10 billion synapses, and neurons with a receptive field resembling a cat's image appeared in the upper layer. Hokusai, by contrast, drew the "Great Wave" using a memory with a fractal structure. Which do you find "beautiful": Google's cat picture or Hokusai's "Great Wave"? I find Hokusai's beautiful, because it rests on stunning information compression. The network proposed in this paper is a one-layer artificial neural network with feedforward and feedback connections. In the feedforward connections, the spatiotemporal learning rule (STLR; Tsukada et al. 1994, 1996) provides high pattern-separation ability, while in the recurrent connections the Hebbian learning rule (HEB) provides pattern completion. The interaction between the two rules plays an important role in self-organizing context-dependent attractors in the memory network, and these attractors depend on the balance between STLR and HEB. This structure is an important factor allowing memory networks to hierarchically embed a sequence of events.
- Research Article
50
- 10.1007/s00422-004-0523-1
- Feb 1, 2005
- Biological Cybernetics
The hippocampus plays an important role in establishing long-term memory, i.e., in forming short-term memories of spatially and temporally associated input information. In 1996, the spatiotemporal learning rule was proposed (Tsukada et al. 1996) based on differences observed in hippocampal long-term potentiation (LTP) induced by various spatiotemporal pattern stimuli. One essential point of this learning rule is that the change in synaptic weight depends on both the spatial coincidence and the temporal summation of input pulses. We applied this rule to a single-layered neural network and compared its ability to separate spatiotemporal patterns with that of other rules, including the Hebbian learning rule and its extended rules. The simulation results showed that the spatiotemporal learning rule had the highest efficiency in discriminating spatiotemporal pattern sequences, while the Hebbian learning rule (including its extended rules) was sensitive to differences in spatial patterns.
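The contrast described above (Hebbian plasticity gated by a postsynaptic spike versus STLR plasticity gated by spatial coincidence and temporal summation of inputs) can be sketched as a toy simulation. This is a schematic illustration, not the published equations: the threshold values, the leaky trace, and the mild depression term are assumptions made for the sketch.

```python
import numpy as np

def hebb_update(w, x, eta=0.05, theta=1.0):
    # Hebbian: a synapse changes only when its input is active
    # AND the postsynaptic threshold unit fires
    y = float(np.dot(w, x) > theta)
    return w + eta * y * x

def stlr_update(w, x, trace, eta=0.05, tau=0.8, theta=2.0):
    # STLR sketch: plasticity is gated by spatial coincidence plus
    # temporal summation of input pulses; no postsynaptic spike needed
    trace = tau * trace + x                 # leaky temporal summation
    if trace.sum() > theta:                 # pooled spatiotemporal coincidence
        w = w + eta * x                     # potentiate active synapses
    else:
        w = w - 0.2 * eta * x               # otherwise mild depression (assumed)
    return w, trace

w_h = np.zeros(4)
w_s = np.zeros(4)
trace = np.zeros(4)
x = np.array([1.0, 1.0, 0.0, 1.0])          # one spatial input pattern
for _ in range(5):                          # repeated pulses in time
    w_h = hebb_update(w_h, x)
    w_s, trace = stlr_update(w_s, x, trace)
print(w_h)  # stays zero: the output never fires, so Hebb never potentiates
print(w_s)  # changes: the STLR sketch is driven by the inputs alone
```

The toy makes the abstract's point concrete: with zero initial weights the Hebbian unit never spikes and learns nothing, whereas the input-driven rule still reshapes the weights from the spatiotemporal input statistics.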
- Research Article
36
- 10.1007/s11571-006-9014-5
- Feb 7, 2007
- Cognitive Neurodynamics
The spatiotemporal learning rule (STLR), proposed as a non-Hebb type by Tsukada et al. (Neural Networks 9 (1996) 1357) and Tsukada and Pan (Biol. Cybern. 92 (2005) 139), consists of two distinctive factors: "cooperative plasticity without a cell spike" and "its temporal summation". On the other hand, Hebb (The Organization of Behavior. John Wiley, New York, 1949) proposed the idea (HEBB) that synaptic modification is strengthened only if the pre- and post-cell are activated simultaneously. We have shown, experimentally, that both STLR and HEBB coexist in single pyramidal cells of the hippocampal CA1 area. The functional differences between STLR and HEBB in dendrite (local)-soma (global) interactions in single pyramidal cells of CA1 and the possibility of pattern separation, pattern completion and reinforcement learning were discussed.
- Book Chapter
2
- 10.1007/11893028_9
- Jan 1, 2006
The spatio-temporal learning rule (STLR), proposed as a non-Hebb type by Tsukada et al. (1996 [1], 2005 [2]), consists of two distinctive factors: "cooperative plasticity without a postsynaptic spike" and its temporal summation. On the other hand, Hebb (1949 [3]) proposed the idea (HEBB) that synaptic modification is strengthened only if the pre- and postsynaptic elements are activated simultaneously. We have shown, experimentally, that both STLR and HEBB coexist in single pyramidal cells of the hippocampal CA1 area. The functional differences between STLR and HEBB in dendrite (local)-soma (global) interactions in single pyramidal cells of CA1 and the possibility of reinforcement learning were discussed.
Keywords: Spatio-temporal learning rule; Cooperative plasticity; Hebb learning rule; Dendritic-soma interaction; Hippocampus
- Book Chapter
4
- 10.1007/978-981-16-0317-4_10
- Jan 1, 2021
The Hebbian learning rule (HEB) with recurrent connections has the ability to stabilize memory patterns, while the spatio-temporal learning rule (STLR) has a high ability to discriminate temporal differences between spatial input patterns in a spatio-temporal context. Experimental studies have confirmed that these learning rules coexist in the brain; however, how they interact with each other in memory processing is still unclear. Here, we constructed a recurrent neural network with two biologically plausible learning rules (HEB and STLR) and evaluated by simulation how spatio-temporal context information is embedded in memory. We found that spatio-temporal context patterns are embedded stably in the memory space as attractors when the two learning rates are approximately balanced, and that the attractors are clustered by temporal history. These findings contribute to the understanding of the fundamental neural mechanisms of spatio-temporal context learning in the brain.
- Conference Article
9
- 10.1109/nnsp.1998.710638
- Aug 31, 1998
This paper presents a new distributed processing approach to "direct" blind equalization of single-input multi-output (SIMO) channels. Under mild conditions, it is shown that we can recover the original source signal up to its scaled and delayed version by decorrelating the equalizer (neural network) outputs in spatio-temporal domain. The "spatio-temporal anti-Hebbian" learning rule (simple, local, biologically plausible) is derived from an information-theoretic approach and is applied for spatio-temporal decorrelation. A linear feedback neural network with FIR synapses (trained by spatio-temporal anti-Hebbian learning rule) is proposed and is shown to be a good candidate for the equalizer. Computer simulation experiments confirm the validity and high performance of the proposed neural network with the associated learning algorithm.
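The decorrelation idea in the abstract can be illustrated in miniature. The paper derives its rule information-theoretically for a feedback network with FIR synapses; the sketch below is a much simpler instantaneous variant in the same anti-Hebbian spirit, where lateral feedback weights grow in proportion to output correlations until the outputs are decorrelated. The 2-D mixing matrix, learning rate, and sample count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D observations: mix two independent Gaussian sources
s = rng.standard_normal((10000, 2))
A = np.array([[1.0, 0.6], [0.0, 1.0]])
x = s @ A.T

M = np.zeros((2, 2))  # lateral feedback weights (zero diagonal)
eta = 0.01
for xt in x:
    # Network settles to y = x - M y, i.e. (I + M) y = x
    y = np.linalg.solve(np.eye(2) + M, xt)
    dM = eta * np.outer(y, y)
    np.fill_diagonal(dM, 0.0)   # anti-Hebbian update on lateral weights only
    M += dM

Y = x @ np.linalg.inv(np.eye(2) + M).T
corr = np.corrcoef(Y.T)[0, 1]
print(round(corr, 3))  # near zero: the outputs have been decorrelated
```

The fixed point of the update is exactly the condition E[y_i y_j] = 0 for i != j, which is the spatial part of the spatio-temporal decorrelation criterion the paper builds on.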
- Research Article
1
- 10.1016/j.ics.2004.06.016
- Aug 1, 2004
- International Congress Series
A computational model of learning and memory
- Research Article
- 10.3156/jsoft.17.3
- Jan 1, 2005
- Journal of Japan Society for Fuzzy Theory and Intelligent Informatics
Learning is the mapping of outer environmental information onto synaptic weight space. Hebb proposed a learning rule based on an AND operation between the input and the output neuron, which became the foundation for later learning rules. We previously proposed a spatio-temporal learning rule based on differences observed in hippocampal long-term potentiation (LTP) induced by various spatio-temporal pattern stimuli. We applied this rule to learning spatio-temporal patterns in a single-layer network and compared its ability to separate spatio-temporal patterns with that of other rules, including the Hebbian learning rule and its extended rules. The simulation results show that the spatio-temporal learning rule has the highest efficiency in separating different spatio-temporal patterns and may thereby be responsible for temporarily storing and recalling memory.
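The AND operation mentioned above has a direct reading for binary units: a synapse is strengthened only when its input fires AND the output neuron fires. The following minimal sketch assumes a threshold unit and an arbitrary learning rate; it is an illustration of the AND formulation, not code from the paper.

```python
import numpy as np

def hebb_and_update(w, x, theta=1.5, eta=0.1):
    # Hebb's rule as an AND operation: strengthen a synapse only when
    # its input is active AND the thresholded output neuron fires
    y = int(np.dot(w, x) >= theta)
    return w + eta * (x * y), y

w = np.array([1.0, 1.0, 0.0, 0.0])   # initial weights (assumed)
x = np.array([1.0, 1.0, 0.0, 1.0])   # binary input pattern
for _ in range(3):                   # repeated presentations
    w, y = hebb_and_update(w, x)
print(w, y)
```

Repeated presentation strengthens exactly the synapses whose inputs co-fire with the output (including initially silent ones once the unit fires), while synapses with inactive inputs are left untouched.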
- Book Chapter
2
- 10.1007/978-981-10-0207-6_101
- Jan 1, 2016
The memory neural network is organized as an attractor space by both bottom-up (sensory) and top-down (contextual) information. This paper presents a possible mechanism for spatiotemporal attractors in a one-layer neural network, based on experimental data and theoretical models of learning and memory. The model rests on the following concepts. First, a sequence of sensory events (bottom-up information) carried by the γ-wave is consolidated in the synaptic weight space by the spatiotemporal learning rule (a local, non-Hebb-type learning rule). In this process, the learning rule plays an important role in the pattern discrimination of spatiotemporal sequences [1, 2]. Second, contextual (top-down) information carried by the θ-wave is consolidated in the same space by a Hebb-type learning rule. Integration of the two consolidated synaptic weight spaces forms an attractor with pattern completion, which is defined as a spatiotemporal attractor.
- Book Chapter
- 10.1007/978-3-642-02490-0_9
- Jan 1, 2009
The following coding mechanisms in the CA3-CA1 hippocampal networks were examined. First, the way in which hippocampal CA1 pyramidal cells represent the information of spatio-temporal input sequences was clarified using the patch-clamp recording method. The input-output relations were analyzed by applying a clustering index and a self-similarity (Cantor-like coding) measure to the sequences. The membrane potentials were hierarchically clustered in a manner self-similar to the input sequences, and this property was found to persist one and two time steps retrograde in the sequences. The experimental results closely matched the theoretical results on Cantor coding reported by Tsuda and Kuroda (2001). Second, in the consolidation process, the spatiotemporal learning rule (STLR), composed of spatial coincidence and its time history, plays an important role in mapping the Cantor-like property onto synaptic weight space. The coexistence of STLR and Cantor-like coding in single pyramidal neurons of the hippocampal CA1 area is discussed from the viewpoint of the coding mechanisms of reinforcement learning.
- Book Chapter
- 10.5772/5277
- Jan 1, 2008
Interaction Between the Spatio-Temporal Learning Rule (Non Hebbian) and Hebbian in Single Cells: A Cellular Mechanism of Reinforcement Learning
- Peer Review Report
- 10.7554/elife.80680.sa2
- Oct 12, 2022
Author response: Neural learning rules for generating flexible predictions and computing the successor representation
- Peer Review Report
- 10.7554/elife.80680.sa0
- Aug 29, 2022
Editor's evaluation: Neural learning rules for generating flexible predictions and computing the successor representation
- Peer Review Report
- 10.7554/elife.80680.sa1
- Aug 29, 2022
Decision letter: Neural learning rules for generating flexible predictions and computing the successor representation