Abstract

We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations.

Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20.
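
For orientation, the Amari-type stochastic neural field equation referred to in the abstract can be written schematically as follows; the domain B, connectivity kernel w, gain function f, noise amplitude ε, and Q-Wiener process W are as specified in the paper, and the notation here is only indicative:

\[
\mathrm{d}U_t(x) = \Bigl[-\alpha\,U_t(x) + \int_{\mathcal{B}} w(x,y)\, f\bigl(U_t(y)\bigr)\,\mathrm{d}y \Bigr]\mathrm{d}t + \epsilon\,\mathrm{d}W_t(x), \qquad x \in \mathcal{B},\ \alpha > 0,
\]

with mild solution

\[
U_t = e^{-\alpha t}U_0 + \int_0^t e^{-\alpha(t-s)}\, K F(U_s)\,\mathrm{d}s + \epsilon \int_0^t e^{-\alpha(t-s)}\,\mathrm{d}W_s, \qquad \bigl(K F(U)\bigr)(x) := \int_{\mathcal{B}} w(x,y)\, f\bigl(U(y)\bigr)\,\mathrm{d}y.
\]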

Highlights

  • Starting from the classical works of Wilson/Cowan [64] and Amari [1], there has been considerable interest in the analysis of spatiotemporal dynamics of mesoscale models

  • The stationary solutions U_a∗ and U_u∗ are stable and unstable, respectively. Although we only focus on stationary solutions, it is important to remark that the techniques developed here could, in principle, be applied to traveling waves U(x, t) = Û(x − st) for wave speed s > 0

  • A general direct approach to the derivation of a large deviation principle (LDP) for infinite-dimensional stochastic evolution equations is presented in [23], and further results have been obtained for certain additional classes of stochastic partial differential equations (SPDEs) [17, 18, 19, 57]; a schematic form of such an LDP is recalled after this list
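
As a reminder of what such an LDP provides (stated schematically; the precise state space, topology, and rate function are those of the paper and the cited works, with the convention that the noise amplitude is ε), one seeks a lower semicontinuous rate function I such that, for suitable sets Γ of paths,

\[
-\inf_{\phi \in \Gamma^{\circ}} I(\phi) \;\le\; \liminf_{\epsilon \to 0}\, \epsilon^{2}\log \mathbb{P}\bigl(U^{\epsilon} \in \Gamma\bigr) \;\le\; \limsup_{\epsilon \to 0}\, \epsilon^{2}\log \mathbb{P}\bigl(U^{\epsilon} \in \Gamma\bigr) \;\le\; -\inf_{\phi \in \bar{\Gamma}} I(\phi),
\]

i.e., informally, P(U^ε ∈ Γ) ≈ exp(−inf_Γ I / ε²) in the small-noise limit.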

Summary

Introduction

Starting from the classical works of Wilson/Cowan [64] and Amari [1], there has been considerable interest in the analysis of spatiotemporal dynamics of mesoscale models. To the best of our knowledge, there seems to be no general Kramers’ law or large deviation principle (LDP) calculation available for continuum neural field models, although large deviations have been of recent interest in neuroscience applications [13, 33]. It is one of the main goals of this paper to provide the basic steps toward a general theory. Although the Amari model has a hidden energy-type structure, we have not been able to generalize the gradient-structure approach for SPDEs to the stochastic Amari model. This raises doubt whether a Kramers’ escape rate calculation can be carried out, i.e., whether one may express the prefactor of the mean first-passage time in the bistable case explicitly.
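
To make the distinction drawn here concrete: for the first-exit time τ^ε from the basin of attraction of a stable state, an LDP typically yields only the exponential (Arrhenius-type) scale, whereas a sharp Kramers’ law also identifies the prefactor. Schematically, with ΔE denoting the relevant quasipotential barrier (notation not taken from the paper),

\[
\lim_{\epsilon \to 0} \epsilon^{2}\log \mathbb{E}\bigl[\tau^{\epsilon}\bigr] = \Delta E \quad\text{(Arrhenius/LDP scale)}
\qquad\text{versus}\qquad
\mathbb{E}\bigl[\tau^{\epsilon}\bigr] = C\,\bigl(1+o(1)\bigr)\, e^{\Delta E/\epsilon^{2}} \quad\text{(Kramers’ law with explicit prefactor } C\text{)}.
\]

The doubt expressed above concerns precisely whether the prefactor C can be identified explicitly for the stochastic Amari model.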

Amari-Type Models
Gain Function Perturbation
Large Deviations and Kramers’ Law
Gradient Structures in Infinite Dimensions
Direct Approach to an LDP
Galerkin Approximation
Approximating the LDP
Outlook
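
To illustrate the type of finite-dimensional approximation the section list above points to, here is a minimal sketch of a spectral Galerkin discretization of the stochastic Amari equation with Euler–Maruyama time stepping. It is not the scheme analyzed in the paper: the one-dimensional domain [0, 2π), the cosine basis, the Gaussian-type kernel, the sigmoidal gain parameters, and the eigenvalue decay of Q are all placeholder assumptions chosen only to make the sketch self-contained.

```python
import numpy as np

# --- discretization of the (assumed) domain B = [0, 2*pi) --------------------
M = 256                                    # quadrature points
x = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
dx = 2.0 * np.pi / M

# --- orthonormal cosine basis e_0, ..., e_{N-1} -------------------------------
N = 16                                     # number of Galerkin modes
E = np.empty((N, M))
E[0] = 1.0 / np.sqrt(2.0 * np.pi)
for k in range(1, N):
    E[k] = np.cos(k * x) / np.sqrt(np.pi)

# --- model ingredients (illustrative choices, not taken from the paper) -------
alpha = 1.0                                # linear decay rate
eps = 0.05                                 # noise amplitude
w = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # connectivity kernel w(x, y)
lam = (1.0 + np.arange(N)) ** -2.0         # assumed eigenvalues of Q (trace class)

def f(u):
    """Sigmoidal gain function with illustrative steepness and threshold."""
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.3)))

def galerkin_drift(u):
    """Drift of the Galerkin coefficients: -alpha*u_k + <e_k, K F(U)>."""
    U = u @ E                              # reconstruct U(x) from coefficients
    KFU = (w @ f(U)) * dx                  # nonlocal term (K F(U))(x) by quadrature
    return -alpha * u + (E @ KFU) * dx     # project back onto the basis

# --- Euler-Maruyama time stepping ---------------------------------------------
rng = np.random.default_rng(0)
T, dt = 10.0, 1e-3
u = (E @ (0.5 * np.ones(M))) * dx          # coefficients of the initial profile U_0 = 0.5
for _ in range(int(T / dt)):
    dW = np.sqrt(dt * lam) * rng.standard_normal(N)   # independent scaled Brownian increments
    u = u + galerkin_drift(u) * dt + eps * dW

U_final = u @ E                            # approximate field U(x, T) on the grid
print("max |U(x, T)| =", np.abs(U_final).max())
```

The structural point is the same as in the approach outlined above: project the nonlocal drift onto finitely many modes, drive each mode by an independent scaled Brownian motion, and obtain a finite-dimensional SDE whose LDP rate function can then be studied or computed numerically.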