Abstract

The Self-Organising Map (SOM) is an Artificial Neural Network (ANN) model consisting of a regular grid of processing units. A model of some multidimensional observation, e.g. a class of digital images, is associated with each unit. The map attempts to represent all the available observations using a restricted set of models. In unsupervised learning, the models become ordered on the grid so that similar models are close to each other. We review here the objective functions and learning rules related to the SOM, starting from vector coding based on a Euclidean metric and extending the theory to arbitrary metrics and to a subspace formalism, in which each SOM unit represents a subspace of the observation space. It is shown that this Adaptive-Subspace SOM (ASSOM) is able to create sets of wavelet- and Gabor-type filters when randomly displaced or moving input patterns are used as training data. No analytical functional form for these filters is thereby postulated. The same kind of adaptive system can create many other kinds of invariant visual filters, such as rotation- or scale-invariant filters, if the corresponding transformations are present in the training data. The ASSOM system can act as a learning feature-extraction stage for pattern recognisers, being able to adapt to arbitrary sensory environments. We then show that the invariant Gabor features can be effectively used in face recognition, whereby the sets of Gabor filter outputs are coded with the SOM and a face is represented by the histogram over the SOM units.
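The basic SOM learning rule summarised above (winner selection by Euclidean distance, followed by a neighbourhood-weighted update that orders similar models close together on the grid) can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the function name, grid parameterisation, and the Gaussian neighbourhood with linearly decaying rate and radius are conventional choices assumed here.

```python
import numpy as np

def train_som(data, grid_w, grid_h, n_iters=1000, lr0=0.5, sigma0=None, seed=0):
    """Train a 2-D SOM: each grid unit holds a model vector; the winner and
    its grid neighbours move toward each input (unsupervised learning)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    if sigma0 is None:
        sigma0 = max(grid_w, grid_h) / 2.0
    # One model vector per grid unit, randomly initialised.
    weights = rng.standard_normal((grid_h, grid_w, dim))
    # Grid coordinates of each unit, used by the neighbourhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        # Winner: the unit whose model is closest to x in the Euclidean metric.
        d = np.linalg.norm(weights - x, axis=2)
        wy, wx = np.unravel_index(np.argmin(d), d.shape)
        # Linearly decaying learning rate and neighbourhood radius.
        frac = t / n_iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        # Gaussian neighbourhood on the grid: the winner's neighbours also
        # move toward x, which is what orders similar models next to each other.
        h = np.exp(-((gy - wy) ** 2 + (gx - wx) ** 2) / (2.0 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)
    return weights
```

The ASSOM replaces each unit's single model vector with a basis spanning a subspace, and the winner is the unit whose subspace best represents a short episode of inputs; the sketch above covers only the classical vector-coding case.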
