Abstract

Much of the interesting complex behaviour of biological systems arises from collective properties. Important information about collective behaviour lies in the time and space structure of fluctuations around average properties, and two-point correlation functions are a fundamental tool for studying these fluctuations. We give a self-contained presentation of the definitions and techniques for the computation of correlation functions, aimed at providing students and researchers outside the field of statistical physics with a practical guide to calculating correlation functions from experimental and simulation data. We discuss some properties of correlations in critical systems and the effect of finite system size, which is particularly relevant for most biological experimental systems. Finally, we apply these techniques to the dynamical transition in a simple neuronal model.

Highlights

  • Much of the interesting complex behaviour of biological systems arises from collective properties

  • There is more to collective behaviour than what is captured by the two-point correlation functions we discuss here

  • Correlation functions should be in the toolbox of any researcher attempting to understand the behaviour of complex biological systems; it is probably fair to say that, while they have been used and studied for a long time in statistical physics, they have not been exploited enough in biology


Summary

Correlation and covariance

Let $x$ and $y$ be two random variables and $p(x, y)$ their joint probability density. We use $\langle \cdots \rangle$ to represent the appropriate averages; e.g. the mean of $x$ is $\langle x \rangle = \int x\, p(x)\, dx$ (the probability distribution of $x$ can be obtained from the joint probability as $p(x) = \int dy\, p(x, y)$, and in this context is called the marginal probability) and its variance is $\mathrm{Var}_x = \langle (x - \langle x \rangle)^2 \rangle$. The covariance, $\mathrm{Cov}_{x,y} = \langle x y \rangle - \langle x \rangle \langle y \rangle$, is bounded by the product of the standard deviations (Priestley, 1981, §2.12), $\mathrm{Cov}_{x,y}^2 \leq \mathrm{Var}_x \mathrm{Var}_y$. The variables are said to be uncorrelated if their covariance is null: $\mathrm{Cov}_{x,y} = 0 \iff \langle x y \rangle = \langle x \rangle \langle y \rangle$ (uncorrelated). Absence of correlation is weaker than independence: independence means that $p(x, y) = p(x)\,p(y)$ (and clearly implies absence of correlation). For uncorrelated variables, since $\mathrm{Var}_{x+y} = \mathrm{Var}_x + \mathrm{Var}_y + 2\,\mathrm{Cov}_{x,y}$, the variance of the sum is the sum of the variances; however, $\mathrm{Cov}_{x,y} = 0$ is equivalent to independence only when the joint probability $p(x, y)$ is Gaussian. The covariance, or the correlation coefficient, is said to measure only the degree of linear association between $x$ and $y$, because it is possible to build a nonlinear dependence of $x$ on $y$ that yields zero covariance (see Ch. 2 of Priestley, 1981).
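As an illustration of the last point, the following minimal Python sketch (not part of the original paper; it assumes only numpy) estimates the covariance and correlation coefficient from samples, and shows that the deterministic but nonlinear relation y = x^2, with x distributed symmetrically around zero, yields essentially zero covariance even though y is fully determined by x.

import numpy as np

rng = np.random.default_rng(0)

# Draw samples of a zero-mean random variable x and set y = x^2.
# y depends deterministically (but nonlinearly) on x, yet the covariance
# vanishes because <x y> = <x^3> = 0 = <x><y> for a symmetric distribution of x.
x = rng.normal(loc=0.0, scale=1.0, size=100_000)
y = x**2

def covariance(a, b):
    # Sample estimate of Cov_{a,b} = <a b> - <a><b>
    return np.mean(a * b) - np.mean(a) * np.mean(b)

cov_xy = covariance(x, y)
# Correlation coefficient rho = Cov_{x,y} / sqrt(Var_x Var_y), bounded by |rho| <= 1.
rho_xy = cov_xy / np.sqrt(np.var(x) * np.var(y))

print(f"Cov(x, y) ~ {cov_xy:+.4f}  (close to 0)")
print(f"rho(x, y) ~ {rho_xy:+.4f}  (close to 0, yet y = x^2 is fully determined by x)")

As the number of samples grows, the estimates converge to the exact values Cov_{x,y} = 0 and rho = 0, while p(x, y) clearly does not factorize into p(x)p(y): the variables are uncorrelated but not independent.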

Fluctuating quantities as stochastic processes
Definition of space-time correlation functions
Global and static quantities
Symmetries
Some properties
Example
Correlations in Fourier space
COMPUTING CORRELATION FUNCTIONS FROM EXPERIMENTAL DATA
Estimation of time correlation functions
Two properties of the estimator
Estimation of space correlation functions
Estimation of space-time correlations
CORRELATION LENGTH AND CORRELATION TIME
Correlation time
Correlation length
CORRELATION FUNCTIONS IN THE CRITICAL REGION
Scaling
Finite-size effects
SPACE-TIME CORRELATIONS IN NEURONAL NETWORKS
The ZBPCC model
ZBPCC on the fcc lattice
CONCLUSIONS