Abstract

Neural Mass Models provide a compact description of the dynamical activity of cell populations in neocortical regions. Moreover, models of regional activity can be connected into networks, and inferences made about the strength of connections, using M/EEG data and Bayesian inference. To date, however, Bayesian methods have been largely restricted to the Variational Laplace (VL) algorithm, which assumes that the posterior distribution is Gaussian and finds model parameters that are only locally optimal. This paper explores the use of Annealed Importance Sampling (AIS) to address these restrictions. We implement AIS using proposals derived from Langevin Monte Carlo (LMC), which uses local gradient and curvature information for efficient exploration of parameter space. In terms of the estimation of Bayes factors, VL and AIS agree about which model is best but report different degrees of belief. Additionally, AIS finds better model parameters, and we find evidence of non-Gaussianity in their posterior distribution.
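As a concrete illustration of the sampler described above, the following Python sketch runs AIS with a Metropolis-adjusted Langevin (MALA) transition at each temperature. This is a minimal sketch, not the paper's implementation: the function names (`log_prior`, `log_lik` and their gradients) are placeholders for user-supplied densities, and the curvature preconditioning mentioned in the abstract is omitted, so the proposal uses gradient information only.

```python
import numpy as np

def ais_langevin(log_prior, log_lik, grad_log_prior, grad_log_lik,
                 theta0, betas, eps=0.05, rng=None):
    """One AIS run: anneal from the prior (beta=0) to the posterior
    (beta=1), applying a Langevin (MALA) transition at each temperature.
    Returns the final sample and its log importance weight."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    log_w = 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # AIS weight update: the tempered target gains
        # (b - b_prev) * log-likelihood, evaluated at the current sample.
        log_w += (b - b_prev) * log_lik(theta)

        # MALA step targeting p(theta) * p(y | theta)**b.
        log_p = lambda t: log_prior(t) + b * log_lik(t)
        grad = lambda t: grad_log_prior(t) + b * grad_log_lik(t)

        prop = theta + 0.5 * eps**2 * grad(theta) \
                     + eps * rng.standard_normal(theta.shape)

        # Log-density of the asymmetric Langevin proposal x -> z.
        def log_q(z, x):
            mu = x + 0.5 * eps**2 * grad(x)
            return -0.5 * np.sum((z - mu)**2) / eps**2

        log_alpha = (log_p(prop) - log_p(theta)
                     + log_q(theta, prop) - log_q(prop, theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
    return theta, log_w
```

With `betas = np.linspace(0, 1, K)` and `theta0` drawn from the prior, averaging `exp(log_w)` over many independent runs gives an unbiased estimate of the model evidence p(y), from which the Bayes factors compared against VL can be formed.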

Highlights

  • Dynamical systems models instantiated using differential equations are a mainstay of modern neuroscience and provide mathematical descriptions of neuronal activity over multiple spatial and temporal scales [1, 2]

  • This paper explores the use of Annealed Importance Sampling (AIS) to address the restrictions of the Variational Laplace algorithm

  • The activity of populations of neurons in the human brain can be described using a set of differential equations known as a neural mass model (see the sketch after this list)
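The excerpt does not reproduce the paper's own equations, so as an illustration of what such a set of differential equations looks like, the sketch below integrates the classic Jansen-Rit neural mass model (our choice of example, with its standard parameter values) using SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard Jansen-Rit parameters (Jansen & Rit, 1995).
A, B = 3.25, 22.0            # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0           # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
v0, e0, r = 6.0, 2.5, 0.56

def sigm(v):
    """Sigmoid converting mean membrane potential to firing rate."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def jansen_rit(t, y, p=220.0):
    """Three second-order ODEs written as six first-order equations;
    p is the (here constant) extrinsic input firing rate."""
    y0, y1, y2, y3, y4, y5 = y
    return [y3, y4, y5,
            A * a * sigm(y1 - y2) - 2 * a * y3 - a**2 * y0,
            A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a**2 * y1,
            B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b**2 * y2]

sol = solve_ivp(jansen_rit, (0.0, 2.0), np.zeros(6), max_step=1e-3)
eeg_like = sol.y[1] - sol.y[2]   # pyramidal-cell potential ~ M/EEG signal
```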


Introduction

Dynamical systems models instantiated using differential equations are a mainstay of modern neuroscience and provide mathematical descriptions of neuronal activity over multiple spatial and temporal scales [1, 2]. In imaging neuroscience a widely adopted framework, called Dynamic Causal Modelling (DCM), has been developed for fitting such models to brain imaging data using a Bayesian approach [3]. This allows inferences to be made about changes in model parameters. One of its core assumptions, the 'Laplace Assumption', is that the posterior distribution is Gaussian. This assumption is typically instantiated by finding the maximum posterior parameter vector using numerical optimisation, making a Taylor expansion of the log posterior around this value, and retaining terms up to second order [7]. The resulting approximate posterior is multivariate Gaussian, with mean and covariance that are iteratively updated to maximise an approximation to the model evidence [6].
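A minimal sketch of the Laplace Assumption as just described, assuming a user-supplied `neg_log_post` function (a placeholder name) that returns the negative log joint density. It mimics the mode-plus-curvature construction only, not the full VL scheme, which iteratively updates the mean and covariance [6].

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(neg_log_post, theta_init):
    """Gaussian (Laplace) approximation to a posterior.

    Finds the maximum-posterior parameter vector by numerical
    optimisation, then uses the local curvature (the second-order
    Taylor term) as the inverse posterior covariance.
    """
    res = minimize(neg_log_post, theta_init, method="BFGS")
    mu = res.x                   # posterior mode
    Sigma = res.hess_inv         # BFGS estimate of the inverse Hessian
    # Laplace estimate of the log model evidence:
    #   log p(y) ~= log p(y, mu) + (d/2) log(2*pi) + (1/2) log|Sigma|
    d = mu.size
    log_evidence = (-res.fun + 0.5 * d * np.log(2.0 * np.pi)
                    + 0.5 * np.linalg.slogdet(Sigma)[1])
    return mu, Sigma, log_evidence
```

In practice an analytic or finite-difference Hessian at the mode is more reliable than the BFGS estimate used here, but the construction is the same.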
