Abstract

The spatiotemporal learning rule (STLR), proposed on the basis of hippocampal neurophysiological experiments, differs essentially from the Hebbian learning rule (HEBLR) in its self-organization mechanism: under the HEBLR, information from the external world is self-organized through the firing of output neurons, whereas under the STLR it is self-organized without output firing. Here, we describe the differences in the self-organization mechanism between the two learning rules by simulating neural network models trained on relatively similar spatiotemporal contextual information. Comparing the weight distributions after training, the HEBLR shows a unimodal distribution near the training vector, whereas the STLR shows a multimodal distribution. We analyzed the shape of the weight distribution in response to temporal changes in contextual information and found that the HEBLR does not change the shape of the weight distribution for time-varying spatiotemporal contextual information, whereas the STLR is sensitive to slight differences in spatiotemporal context and produces a multimodal distribution. These results suggest a critical difference in the dynamic change of synaptic weight distributions between the HEBLR and the STLR in contextual learning. They also capture the characteristic pattern completion of the HEBLR and pattern discrimination of the STLR, which adequately explain the self-organization mechanism of contextual information learning.
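As a rough illustration of the kind of analysis the abstract describes, the sketch below histograms a trained weight vector and counts local maxima as a crude modality check. The synthetic weight vectors, bin count, smoothing window, and peak floor are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def count_modes(weights, bins=30, smooth=5):
    """Crude modality check: histogram the weights, smooth the bin
    counts with a moving average, and count local maxima that rise
    above a small fraction of the tallest peak."""
    counts, _ = np.histogram(weights, bins=bins)
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(counts, kernel, mode="same")
    floor = 0.05 * smoothed.max()  # ignore tiny tail fluctuations
    return sum(
        smoothed[i] > max(smoothed[i - 1], smoothed[i + 1], floor)
        for i in range(1, len(smoothed) - 1)
    )

rng = np.random.default_rng(0)
# Synthetic stand-ins for trained weight vectors (illustrative only):
# one cluster near a single value vs. two well-separated clusters.
unimodal = rng.normal(0.8, 0.05, 5000)
multimodal = np.concatenate([rng.normal(0.2, 0.05, 2500),
                             rng.normal(0.9, 0.05, 2500)])
print(count_modes(unimodal))    # typically 1
print(count_modes(multimodal))  # typically 2
```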

Highlights

  • Learning is the embedding of information from the outside world into the changes in the connections between neurons in a neural network based on the correlation of neural activity

  • The key point of the spatiotemporal learning rule (STLR) is that the amount of change in w_ij differs depending on the classification of I_ij by two different thresholds, θ1 and θ2, which enables various forms of learning (see the sketch after this list)

  • To investigate the difference between the Hebbian learning rule (HEBLR) and the STLR for contextual input, we fed input series generated by combining spatial patterns into the network
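To make the second highlight concrete, here is a minimal sketch of how a threshold classification of the input summation I_ij could drive the weight change. The threshold values, learning rates, and the region-to-change mapping (potentiation above θ2, depression between θ1 and θ2, no change below θ1) are illustrative assumptions, not values taken from the paper.

```python
def stlr_delta(i_ij, theta1=0.4, theta2=0.7, lr_ltp=0.01, lr_ltd=0.005):
    """Sketch of the STLR classification: the spatiotemporal input
    summation I_ij is compared against two thresholds, theta1 and
    theta2, and the sign and size of the weight change depend on
    which region it falls in (the mapping below is an assumption)."""
    if i_ij >= theta2:   # strong input coincidence: potentiation
        return lr_ltp * (i_ij - theta2)
    if i_ij >= theta1:   # intermediate coincidence: depression
        return -lr_ltd * (i_ij - theta1)
    return 0.0           # weak coincidence: no change
```

Note that no output activity appears in the update; the change is determined entirely by the classification of I_ij, which is the contrast with the Hebbian rule drawn throughout the paper.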


Summary

INTRODUCTION

Learning is the embedding of information from the outside world into changes in the connections between neurons in a neural network (changes in synaptic weights) based on the correlation of neural activity. When an input series is dynamically fed in from the external environment, the input vectors in the series will be similar to one another. These input vectors are either discriminated into different categories or integrated into one category in the information processing of memory in the brain. The STLR proposed by Tsukada et al. enables learning according to the information structure of the external environment without firing the output neurons (Tsukada et al., 1996, 2007; Tsukada and Pan, 2005). This enables a significantly flexible representation of information. Clarifying this mechanism helps us understand how a learning rule grounded in physiological experiments represents information in learning and memory networks, and contributes to applications in brain-inspired artificial intelligence.
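As a toy contrast under the descriptions above (not the paper's actual network), the sketch below builds a contextual input series by combining fixed spatial patterns, as in the third highlight, and applies two update rules: a Hebbian one gated by the output firing, and an STLR-like one driven only by the coincidence of successive inputs. All pattern sizes, thresholds, and rates are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A contextual input series: the same spatial patterns combined in
# different temporal orders (hypothetical sizes and values).
patterns = rng.integers(0, 2, size=(3, 100)).astype(float)
context = [patterns[i] for i in (0, 1, 2, 1, 0)]

def heblr_step(w, x, lr=0.01, theta=0.2):
    """Hebbian sketch: the update is gated by the output firing y,
    so an input that does not fire the neuron leaves w unchanged."""
    y = float(w @ x / x.size > theta)   # binary output neuron (assumed)
    return w + lr * y * x

def stlr_step(w, x_prev, x, lr=0.01, theta=0.25):
    """STLR-like sketch: the update depends on the coincidence of
    successive inputs (a crude stand-in for the spatiotemporal
    summation I_ij), with no reference to the output firing."""
    coincidence = (x_prev * x).mean()
    return w + lr * (coincidence - theta) * x

w_heb = rng.uniform(0.0, 1.0, 100)
w_stl = w_heb.copy()
for t in range(1, len(context)):
    w_heb = heblr_step(w_heb, context[t])
    w_stl = stlr_step(w_stl, context[t - 1], context[t])
```

The design point the contrast is meant to surface: the Hebbian update can only reflect inputs that drive the output neuron past threshold, while the STLR-like update registers differences between successive inputs even when the output never fires.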

Neuron Model
Network Architecture
Input Spatiotemporal Pattern
Learning Rules
STLR Algorithm
HEBLR Algorithm
SIMULATION RESULTS
Learning With Same Contextual Input Sequences
Learning With Different Contextual Input Sequences
DISCUSSION
Learning Algorithms
Verification Through Simulation