Abstract
Inverse problems in computational neuroscience comprise the determination of synaptic weight matrices or kernels for neural networks or neural fields, respectively. Here, we reduce multi-dimensional inverse problems to inverse problems in lower dimensions, which can be solved more easily or even explicitly through kernel construction. In particular, we discuss a range of embedding techniques and analyze their properties. We study the Amari equation as a particular example of a neural field theory. We obtain a solution of the full 2D or 3D problem by embedding 0D or 1D kernels into the domain of the Amari equation using a suitable path parametrization and basis transformations. Pulses are interconnected at branching points via path gluing. As instructive examples we construct logical gates, such as the persistent XOR, and binary addition in neural fields. In addition, we compare results of inversion by dimensional reduction with a recently proposed global inversion scheme for neural fields based on Tikhonov–Hebbian learning. The results show that stable construction of complex distributed processes is possible via neural field dynamics. This is an important first step toward studying the properties of such constructions and analyzing natural or artificial realizations of neural field architectures.
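To make the forward model concrete, the following is a minimal sketch of the 1D Amari equation, τ ∂u/∂t = −u + ∫ w(x − y) f(u(y, t)) dy + h, integrated with an Euler scheme on a periodic domain. All parameter values, the difference-of-Gaussians kernel, and the sigmoid firing rate are illustrative assumptions, not the specific kernels constructed in the paper.

```python
import numpy as np

def mexican_hat(x, a_e=2.0, s_e=1.0, a_i=1.0, s_i=2.0):
    """Difference-of-Gaussians lateral interaction kernel (assumed shape)."""
    return (a_e * np.exp(-x**2 / (2 * s_e**2))
            - a_i * np.exp(-x**2 / (2 * s_i**2)))

def simulate_amari(n=200, length=20.0, steps=500, dt=0.05,
                   tau=1.0, h=-0.5, beta=5.0):
    """Euler integration of the 1D Amari equation on a periodic domain."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = x[1] - x[0]
    # Shift the kernel so zero lag sits at index 0 for circular FFT convolution.
    w_hat = np.fft.fft(np.fft.ifftshift(mexican_hat(x)))
    u = np.exp(-x**2)  # localized initial activation (a "bump")
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-beta * u))       # sigmoid firing rate
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f))) * dx
        u = u + dt / tau * (-u + conv + h)        # tau du/dt = -u + w*f(u) + h
    return x, u

x, u = simulate_amari()
print("max activation:", u.max())
```

With a suitable kernel, the initial bump relaxes toward a stationary localized state; kernel construction inverts this relationship by choosing w so that a prescribed pattern (e.g. a travelling pulse along an embedded path) becomes a solution.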
Highlights
Neural field theories are continuum approximations for high-dimensional neural networks (Griffith, 1963; Wilson and Cowan, 1973; Amari, 1977; Ermentrout and McLeod, 1993; Jirsa and Haken, 1996; Robinson et al., 2001; beim Graben, 2008).
We have shown that stable and controllable (a) pulse dynamics, (b) localized-state dynamics, and (c) distributed Hilbert-space-based logical dynamics can be achieved in a neural field environment. This provides a basis for the further study of artificial or natural cognitive dynamics based on continuous connectionist structures.
This is essentially a continuous version of classical connectionist feed-forward architectures.