Abstract
Earlier models for the self-organization of orientation preference and orientation selectivity maps are explicitly designed to reproduce the functional structures observed in cortical tissue. They mostly use formal though biologically motivated implementations and artificial assumptions to achieve this result. In particular, orientation-selective cells are usually encoded by doubling the orientation preference angle, which introduces an ad hoc 180° symmetry to the models. This symmetry is then reflected by the emerging ±180° vortices, which parallel physiological findings. In this work a linear feed-forward neural network model is presented that is not designed to reproduce orientation maps but instead to parallel the anatomical architecture of the early visual pathway. The network is trained using a general Hebb-type unsupervised learning rule and uncorrelated white noise as input. Arguments are given that, on average, even strong intracortical interactions have only a weak influence on the learning dynamics of the afferent weights. An approximate description of the learning dynamics of these weights is then developed which strongly reduces computational expense without predetermining the receptive field properties, as earlier approaches do. For parameter regimes in which the most stable receptive fields form within the given model network, vortex structures containing singularities and fractures are observed. In addition, for strong lateral interactions, regions of reduced orientation selectivity appear, which coincide with these singularities. Thus, the present model suggests an implicit and biologically plausible coupling mechanism for the coordinated development of orientation preference and orientation selectivity maps.
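As a rough illustration of the kind of training scheme described above (a linear feed-forward layer updated by a general Hebb-type rule on uncorrelated white-noise input), the following minimal sketch may be helpful. It is not the authors' model: the layer sizes, learning rate, and the multiplicative weight normalization are illustrative assumptions standing in for the constraints and approximations developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 16   # illustrative layer sizes (assumed, not from the paper)
eta = 1e-3             # learning rate (assumed)
n_steps = 10_000

# Afferent weights from the input layer to the cortical layer
W = rng.normal(scale=0.1, size=(n_out, n_in))

for _ in range(n_steps):
    x = rng.normal(size=n_in)      # uncorrelated white-noise input pattern
    y = W @ x                      # linear feed-forward response
    W += eta * np.outer(y, x)      # general Hebb-type update: dW ∝ y x^T
    # Multiplicative normalization keeps each receptive field bounded,
    # a simple stand-in for the constraints used in such models.
    W /= np.linalg.norm(W, axis=1, keepdims=True)
```

Under such a scheme, structure in the receptive fields must arise from the network architecture rather than from correlations in the input, which is the point the abstract emphasizes.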