Abstract
A fundamental tenet of the theory of deterministic chaos holds that infinitesimal variation in the initial conditions of a network operating in the basin of a low-dimensional chaotic attractor causes trajectories to diverge from each other quickly. This "sensitivity to initial conditions" might seem to hold promise for signal detection, owing to an implied capacity for distinguishing small differences in patterns. However, this sensitivity is incompatible with pattern classification, because it amplifies irrelevant differences in incomplete patterns belonging to the same class, and it renders the network easily corrupted by noise. Here a theory of stochastic chaos is developed, in which aperiodic outputs with 1/f² spectra are formed by the interaction of globally connected nodes that are individually governed by point attractors under perturbation by continuous white noise. The interaction leads to a high-dimensional global chaotic attractor that governs the entire array of nodes. An example is our spatially distributed KIII network, which is derived from studies of the olfactory system and stabilized by additive noise modeled on biological noise sources. Systematic parameterization of the interaction strengths, corresponding to synaptic gains among nodes representing excitatory and inhibitory neuron populations, enables the formation of a robust high-dimensional global chaotic attractor. Reinforcement learning from examples of the patterns to be classified, using habituation and association, creates lower-dimensional local basins, which form a global attractor landscape with one basin for each class. Thereafter, presentation of an incomplete example of a test pattern leads to confinement of the KIII network in the basin corresponding to that pattern's class, which constitutes many-to-one generalization. The capture after learning is expressed by a stereotypical spatial pattern of amplitude modulation of a chaotic carrier wave. Sensitivity to initial conditions is no longer an issue. Scaling the additive noise as a parameter optimizes the classification of data sets in a manner comparable to stochastic resonance. The local basins constitute dynamical memories that solve difficult problems in classifying data sets that are not linearly separable. New local basins can be added quickly from very few examples without loss of existing basins. The attractor landscape enables the KIII set to provide an interface between noisy, unconstrained environments and conventional pattern classifiers. Examples given here of its robust performance include fault detection in small machine parts and the classification of spatiotemporal EEG patterns from rabbits trained to discriminate visual stimuli.
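The core mechanism summarized above can be illustrated with a minimal Python sketch: an array of globally coupled second-order nodes, each of which is a damped system relaxing to a point attractor in isolation, driven by continuous additive white noise through an asymmetric sigmoid nonlinearity in the style of Freeman's K-set models. The rate constants (0.22 and 0.72 per ms) and the sigmoid form are commonly quoted for those models, but the gain matrix `W`, the noise level `noise_sd`, the Euler integration scheme, and the function names are illustrative assumptions here, not the published KIII parameterization.

```python
import numpy as np

# Open-loop rate constants commonly quoted for Freeman's K-set
# models (per ms); all other parameters below are illustrative.
A, B = 0.22, 0.72
QM = 5.0  # asymptotic maximum of the asymmetric sigmoid

def Q(v, qm=QM):
    """Asymmetric wave-to-pulse sigmoid used in Freeman's models."""
    return qm * (1.0 - np.exp(-(np.exp(v) - 1.0) / qm))

def simulate(W, noise_sd=0.1, steps=4000, dt=0.5, seed=0):
    """Semi-implicit Euler integration of the coupled array:
       x'' + (A + B) x' + A*B*x = A*B*(W @ Q(x) + noise).
    An isolated node (all gains zero, no noise) relaxes to a point
    attractor; coupling plus sustained white noise supplies the
    perturbation on which the stochastic-chaos mechanism relies."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros(n)   # node activations (wave amplitudes)
    v = np.zeros(n)   # their time derivatives
    out = np.empty((steps, n))
    for t in range(steps):
        drive = W @ Q(x) + rng.normal(0.0, noise_sd, n)
        v += dt * (A * B * (drive - x) - (A + B) * v)
        x += dt * v
        out[t] = x
    return out

# Two excitatory and two inhibitory populations; W[i, j] is the
# gain from node j to node i, so columns carry one sign apiece.
# The magnitudes are guesses chosen only for demonstration.
W = np.array([[0.0, 1.5, -1.8, -0.5],
              [1.2, 0.0, -1.5, -0.4],
              [1.6, 1.4,  0.0, -0.9],
              [0.8, 0.7, -1.2,  0.0]])

traj = simulate(W)
```

In such a sketch, sweeping `noise_sd` plays the role of the noise-scaling parameter described in the abstract: with the noise removed, the isolated node dynamics are governed by point attractors, and the noise amplitude can be tuned, much as in stochastic resonance, toward whatever value best supports sustained aperiodic activity and classification.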