Perception as Modeling: A neural network that extracts and models predictable elements from input

Perception is not simply the passive analysis of input data; it involves modeling the external world, making inferences about predictable events, and noting when something unexpected happens. In most models of sensory processing, activity generated by external stimuli in peripheral sensors drives networks that perform computations such as object recognition. We consider a different scheme, in which stimulus-driven activity acts as a training signal for a spontaneously active network. This 'predictor' network continually attempts to model the training data, in the sense of being able to generate it spontaneously even if the sensory input is cut off. Of course, only certain aspects of the sensory data may be predictable. To address this, an 'extractor' circuit, guided by feedback from the predictor, produces the training signal for the predictor network. The extractor pulls out of the sensory stream those aspects of the data that the predictor network can reproduce. This automatically divides the input data into three categories: 1) noise, the part of the input stream ignored by the extractor; 2) a predictable signal, isolated by the extractor circuit and internally reproduced by the predictor; and 3) surprising events, elements of the input stream generated by the extractor that do not match the output of the predictor.

The predictor is a recurrent network of firing-rate model neurons with a linear readout that provides both the output of, and feedback to, the network. As in the work of Jaeger and Haas (Science, 2004), the learning algorithm modifies only the weights connecting the network to the output unit, but unlike their approach, we leave all feedback intact during learning.
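As an illustration, this predictor architecture, a chaotic firing-rate network whose single linear readout is fed back into the network, with learning confined to the readout weights, can be sketched in a few lines. This is a minimal sketch under assumed parameter values, a sinusoidal stand-in for the extracted target signal, and a recursive-least-squares readout update; it is not the authors' implementation.

```python
# Minimal sketch of a firing-rate predictor network with a learned linear
# readout fed back into the network. All parameters, the sine-wave target,
# and the recursive-least-squares (RLS) update are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 300             # number of rate units (illustrative)
g = 1.5             # gain > 1 makes the untrained network spontaneously active
dt, tau = 0.1, 1.0  # integration step and unit time constant
alpha = 1.0         # RLS regularization constant

J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
wf = 2.0 * rng.random(N) - 1.0                    # fixed feedback weights
w = np.zeros(N)                                   # readout weights (learned)
P = np.eye(N) / alpha                             # running inverse correlation

x = 0.5 * rng.standard_normal(N)                  # network state
r = np.tanh(x)                                    # firing rates
z = w @ r                                         # readout output

T = 2000
target = np.sin(2 * np.pi * np.arange(T) * dt / 50.0)  # stand-in target signal

errors = []
for t in range(T):
    # Rate dynamics: the readout z is fed back through wf (feedback stays on).
    x += dt / tau * (-x + J @ r + wf * z)
    r = np.tanh(x)
    z = w @ r
    err = z - target[t]
    # RLS update of the readout weights only; the recurrent weights J are fixed.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= err * k
    errors.append(float(err))
```

After training, the readout error stays small throughout the run; the rate at which `w` is still changing could then serve as the abstract's proposed supervisory signal to the extractor.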
The key element is a novel learning rule, which we call FORCE learning, that restricts errors in the output to small values throughout training. FORCE learning was originally a supervised scheme, but in the extractor-predictor approach the extractor circuit acts as the supervisor for the predictor network, so the combined scheme is unsupervised. Because FORCE learning keeps errors small, the modeling network always generates a close match to the target and thus does not produce 'hallucinations' unrelated to external reality. The rate of change of the readout weights provides a measure of whether the target function can be generated autonomously by the predictor. We use this measure as a supervisory signal for the extractor, which is modified if the target signal it extracts from the input data cannot be autonomously modeled. The combined recurrent predictor network and linear-filter extractor succeeds at finding and modeling predictable structure (if it exists) in high-dimensional time-series data, even when the data are polluted by extremely complex noise. This network can be viewed as a general method for extracting patterns from complicated time series or, from a systems-neuroscience perspective, as a model of perception in which the world is understood through internal prediction, with further processing engaged only when sensory input fails to match expectations.

Conference: Computational and systems neuroscience 2009, Salt Lake City, UT, United States, 26 Feb - 3 Mar, 2009.
Presentation Type: Poster Presentation
Topic: Poster Presentations
Citation: (2009). Perception as Modeling: A neural network that extracts and models predictable elements from input. Front. Syst. Neurosci. Conference Abstract: Computational and systems neuroscience 2009. doi: 10.3389/conf.neuro.06.2009.03.253
Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters. The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated. Each abstract, as well as the collection of abstracts, is published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed. For Frontiers' terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 03 Feb 2009; Published Online: 03 Feb 2009.