Abstract
An intrinsic limitation of linear, Hebbian networks is that they can learn only from the linear pairwise correlations within an input stream. To explore what higher forms of structure could be learned with a nonlinear Hebbian network, we constructed a model network containing a simple form of nonlinearity and applied it to the problem of learning to detect the disparities present in random-dot stereograms. The network consists of three layers, with nonlinear sigmoidal activation functions in the second-layer units. The nonlinearities allow the second layer to transform the pixel-based representation in the input layer into a new representation based on coupled pairs of left-right inputs. The third layer then clusters the patterns occurring on the second-layer outputs according to their disparity via a standard competitive learning rule. Analysis of the network dynamics shows that the second-layer units' nonlinearities interact with the Hebbian learning rule to expand the region over which pairs of left-right inputs are stable. The learning rule is neurobiologically inspired and plausible, and the model may shed light on how the nervous system learns to use coincidence detection more generally.
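As a rough illustration of the architecture described above (not the authors' implementation), the following Python sketch wires a binary random-dot stereogram generator to a sigmoidal second layer trained with a bounded Hebbian rule, and a winner-take-all third layer trained with standard competitive learning. Oja's rule stands in for the paper's Hebbian rule, which the abstract does not specify; all layer sizes, learning rates, and the stereogram generator are likewise illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 8           # pixels per eye (illustrative)
N_HIDDEN = 16          # second-layer units (illustrative)
N_OUTPUT = 3           # third-layer units, one per disparity (illustrative)
DISPARITIES = (-1, 0, 1)

def random_dot_stereogram(disparity):
    """Random binary left image; right image is the left image shifted by `disparity`."""
    left = rng.integers(0, 2, N_PIXELS).astype(float)
    right = np.roll(left, disparity)
    return np.concatenate([left, right])    # the input layer sees both half-images

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.1, (N_HIDDEN, 2 * N_PIXELS))    # input -> second layer
W2 = rng.normal(0.0, 0.1, (N_OUTPUT, N_HIDDEN))        # second -> third layer
W2 /= np.linalg.norm(W2, axis=1, keepdims=True)        # competitive layer starts normalized

eta1, eta2 = 0.01, 0.05

for _ in range(20000):
    x = random_dot_stereogram(rng.choice(DISPARITIES))

    # Second layer: sigmoidal units updated with a bounded Hebbian rule
    # (Oja's rule, an assumed stand-in for the paper's exact rule).
    h = sigmoid(W1 @ x)
    W1 += eta1 * (np.outer(h, x) - (h ** 2)[:, None] * W1)

    # Third layer: standard competitive learning -- the winning unit moves
    # its weight vector toward the current second-layer activity pattern.
    winner = np.argmax(W2 @ h)
    W2[winner] += eta2 * (h - W2[winner])

# Rough check: do third-layer winners correlate with input disparity?
for d in DISPARITIES:
    wins = [np.argmax(W2 @ sigmoid(W1 @ random_dot_stereogram(d)))
            for _ in range(200)]
    print(f"disparity {d:+d}: most frequent winner = {np.bincount(wins).argmax()}")

Whether the competitive units cleanly separate the three disparities depends on these assumed parameters; the sketch is meant only to make the roles of the three layers and the two learning rules concrete.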