Abstract

Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required to estimate higher-order correlations and the resulting information-theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows us to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by the marginals and by the linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test that determines, for a given divergence measure of interest, whether the experimental data lead to rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data recorded with multielectrode arrays from the primary visual cortex of an anesthetized cat during an adaptation experiment. Using mutual information as the divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information-theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly.
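
The abstract does not include an implementation, but the testing logic it describes can be illustrated with a minimal, generic sketch. The function below assumes that the user supplies a divergence statistic (for example, a plug-in estimate of mutual information) and a sampler for the fitted reference model; fitting and sampling the maximum entropy model constrained by marginals and linear correlations, which is the technically involved part of the method, is not shown here, and the function and parameter names are illustrative.

```python
import numpy as np

def monte_carlo_gof_test(statistic, data, sample_null, n_surrogates=1000, seed=0):
    """Generic Monte Carlo goodness-of-fit test (illustrative sketch only).

    statistic   : callable mapping a data set to a scalar divergence measure,
                  e.g. an estimate of mutual information.
    data        : observed data set (e.g. an array of joint spike counts).
    sample_null : callable(rng, n) returning a surrogate data set of size n
                  drawn from the fitted reference (null) model.
    Returns the observed statistic and a Monte Carlo p-value: the fraction of
    surrogate data sets whose statistic is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    t_obs = statistic(data)
    n = len(data)
    t_null = np.array([statistic(sample_null(rng, n)) for _ in range(n_surrogates)])
    # The "+1" terms keep the p-value valid for a finite number of surrogates.
    p_value = (1 + np.sum(t_null >= t_obs)) / (n_surrogates + 1)
    return float(t_obs), float(p_value)
```

In this scheme, the null hypothesis that the data were generated by one of the reference distributions is rejected at level alpha whenever the returned p-value is smaller than alpha.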

Highlights

  • Neural coding examines the way that populations of neurons represent and signal information

  • Dependencies between the activity of multiple neurons are typically described by the linear correlation coefficient

  • Dependencies beyond the linear correlation coefficient, so-called higher-order correlations, are often neglected because too many experimental samples are required in order to estimate them reliably

Introduction

Neural coding examines the way that populations of neurons represent and signal information. A central topic in population coding is the impact of correlations between spike counts across repeated presentations of the same stimulus (noise correlations). How important are these dependencies for the information carried by the neural population response? We define higher-order correlations as statistical moments of order greater than two that are not uniquely determined by the first- and second-order statistics. According to this definition, higher-order correlations go beyond the linear correlation and can, but need not, involve more than two neurons. They comprise all dependencies that are not already characterized by the correlation coefficients; a small worked example of such a dependence for a pair of neurons is sketched below.
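
To make this definition concrete, the following toy example (not taken from the paper; the uniform reference distribution and the particular perturbation are illustrative choices) constructs two joint distributions of a pair of spike counts in {0, 1, 2} that share identical marginals and an identical Pearson correlation coefficient, yet differ in their joint structure and therefore in their mutual information:

```python
import numpy as np

# Two joint pmfs over spike counts {0, 1, 2} x {0, 1, 2} that share the same
# marginals and the same Pearson correlation, but differ in their joint
# structure (a toy illustration of "higher-order" dependence for a pair).
counts = np.arange(3)

p_ref = np.full((3, 3), 1.0 / 9.0)           # independent, uniform reference

# Perturbation with zero row sums, zero column sums, and zero change in E[XY];
# adding it leaves the marginals and the correlation coefficient untouched.
d = np.array([[ 3., -4.,  1.],
              [-4.,  4.,  0.],
              [ 1.,  0., -1.]])
p_alt = p_ref + 0.02 * d                      # still a valid pmf (all entries >= 0)

def mutual_information(p):
    """Plug-in mutual information (in bits) of a joint pmf given as a matrix."""
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / np.outer(px, py)[nz])))

for name, p in [("reference", p_ref), ("perturbed", p_alt)]:
    px, py = p.sum(axis=1), p.sum(axis=0)
    exy = counts @ p @ counts                 # E[XY]
    print(name, "marginals:", px.round(3), py.round(3),
          "E[XY] =", round(exy, 3), "MI =", round(mutual_information(p), 4))
```

Because the perturbation has zero row sums, zero column sums, and leaves E[XY] unchanged, first- and second-order statistics cannot distinguish the two distributions; only their higher-order structure, and hence their mutual information, differs.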

