Abstract

In this contribution we report on a study of a very versatile neural network algorithm known as “Self-Organizing Feature Maps”, based on earlier work by Kohonen [1,2]. In its original version, the algorithm addresses a fundamental issue of brain organization, namely how topographically ordered maps of sensory information can be formed by learning. We investigate this algorithm for large numbers of neurons (up to 16 K) and for input spaces of dimension d ⩽ 900. To meet the computational demands, the algorithm was implemented on two parallel machines: a self-built systolic ring of Transputers and a Connection Machine CM-2. We present below (i) a simulation based on the feature map algorithm modelling part of the synaptic organization in the “hand region” of the somatosensory cortex, (ii) a study of the influence of the dimension of the input space on the learning process, (iii) a simulation of the extended algorithm, which explicitly includes lateral interactions, and (iv) a comparison of the Transputer-based “coarse-grained” implementation of the model with the “fine-grained” implementation of the same system on the Connection Machine.
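For orientation, the following is a minimal sketch of one learning step of the basic Kohonen feature map algorithm referred to above: the best-matching neuron is found for an input vector, and all weight vectors are pulled toward the input with a strength that decays with distance on the neuron grid, which is what produces the topographic ordering. The map layout, learning rate, and Gaussian neighbourhood width used here are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def som_step(weights, x, eps, sigma):
    """One Kohonen learning step.

    weights : (map_h, map_w, d) array of weight vectors on a 2-D neuron grid
    x       : (d,) input vector
    eps     : learning rate (assumed constant here; typically annealed)
    sigma   : width of the Gaussian neighbourhood on the grid
    """
    map_h, map_w, _ = weights.shape

    # 1. Find the best-matching unit: the neuron whose weight vector
    #    is closest to the input in Euclidean distance.
    dist = np.linalg.norm(weights - x, axis=2)
    r = np.unravel_index(np.argmin(dist), (map_h, map_w))

    # 2. Update all neurons, weighted by a Gaussian neighbourhood
    #    centred on the best-matching unit on the grid.
    ii, jj = np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij")
    grid_dist2 = (ii - r[0]) ** 2 + (jj - r[1]) ** 2
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
    weights += eps * h[:, :, None] * (x - weights)
    return weights
```

In a full simulation this step is iterated over a stream of input vectors while eps and sigma are gradually reduced; the parallel implementations discussed in the paper distribute the neurons (and hence the distance computation and weight update) across processors.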
