Abstract

Although Shannon mutual information has been widely used, it is often difficult to compute effectively for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them performs consistently and with high accuracy regardless of whether the encoded variables are discrete or continuous. Numerical simulations confirmed that our formulas are highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may make it more convenient to apply information theory to many practical and theoretical problems.
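
For reference, the two divergences named above have the following standard definitions (stated here in their commonly used forms, not necessarily in this paper's exact notation): for distributions $p$ and $q$,

\[
D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx,
\qquad
D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha - 1} \log \int p(x)^{\alpha} q(x)^{1-\alpha} \, dx ,
\]

where $D_{\alpha}$ is the Rényi divergence of order $\alpha > 0$, $\alpha \neq 1$, and $D_{\mathrm{KL}}$ is recovered in the limit $\alpha \to 1$. The Fisher-information approximation mentioned above is commonly cited in the following asymptotic form for a $K$-dimensional continuous stimulus $\theta$ with prior $p(\theta)$ and Fisher information matrix $J(\theta)$:

\[
I(\Theta; R) \;\approx\; H(\Theta) + \frac{1}{2} \int p(\theta) \log \frac{\det J(\theta)}{(2\pi e)^{K}} \, d\theta .
\]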

Highlights

  • Information theory is a powerful tool widely used in many disciplines, including neuroscience, machine learning, and communication technology [1,2,3,4,5,6,7]

  • We found that the three formulas, namely Ie, Id, and ID, provided excellent approximations to the true values of mutual information as evaluated by the Monte Carlo method

  • Our final approximation formulas involve only Kullback-Leibler divergence, which is often easier to evaluate than Shannon mutual information in practical applications
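
The connection between mutual information and Kullback-Leibler divergence noted in the last point rests on the standard identity (stated here for context, not as one of the paper's proposed formulas) expressing mutual information as a prior-averaged divergence between the conditional and marginal response distributions:

\[
I(X; R) = \int p(x)\, D_{\mathrm{KL}}\!\bigl( p(r \mid x) \,\big\|\, p(r) \bigr) \, dx
        = \iint p(x)\, p(r \mid x) \log \frac{p(r \mid x)}{p(r)} \, dr \, dx ,
\]

with the integrals replaced by sums when the stimulus or the responses are discrete.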

Introduction

Information theory is a powerful tool widely used in many disciplines, including neuroscience, machine learning, and communication technology [1,2,3,4,5,6,7]. Asymptotic approximations allow us to prove that the problem of finding the optimal neural population distribution, the one that maximizes the mutual information between stimulus and response, can be solved by convex optimization [24]. The Fisher-information-based approach, however, does not generalize to discrete variables, since the calculation of Fisher information requires partial derivatives of the likelihood function with respect to the encoded variables. The concrete examples used in our numerical simulations were based on a Poisson spike model, in which the response of each neuron is taken as its spike count within a given time window. While this simple Poisson model allowed us to consider a large neural population, it captured only the spike rate and not the temporal structure of the spike trains [25,26,27,28].
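
To make the simulation setup concrete, the following is a minimal sketch, not the authors' code, of a Monte Carlo estimate of the mutual information between a discrete stimulus and the spike counts of a population of independent Poisson neurons; the population size, tuning curves, and time window are illustrative assumptions.

```python
# Monte Carlo estimate of I(X; R) for a population of independent Poisson
# neurons responding to a discrete stimulus. Illustrative sketch only:
# the population size, tuning curves, and time window are arbitrary choices.
import numpy as np
from scipy.stats import poisson
from scipy.special import logsumexp

rng = np.random.default_rng(0)

M = 8          # number of discrete stimulus values (uniform prior); illustrative
N = 50         # number of neurons; illustrative
T = 0.5        # spike-count time window in seconds; illustrative

# Illustrative bell-shaped tuning curves on a circular stimulus variable.
stimuli = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
centers = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def rates(x):
    """Mean firing rate (Hz) of each neuron for stimulus value x."""
    d = np.angle(np.exp(1j * (x - centers)))          # wrapped circular distance
    return 5.0 + 45.0 * np.exp(-d ** 2 / (2 * 0.3 ** 2))

mean_counts = np.array([rates(x) * T for x in stimuli])   # shape (M, N)

# Draw stimulus-response samples.
S = 20000
x_idx = rng.integers(M, size=S)                  # sampled stimulus indices
counts = rng.poisson(mean_counts[x_idx])         # sampled spike counts, (S, N)

# log p(r_s | x) for every sample s and every candidate stimulus x -> (S, M)
log_lik = poisson.logpmf(counts[:, None, :], mean_counts[None, :, :]).sum(axis=2)

log_prior = -np.log(M)                                    # uniform prior
log_marginal = logsumexp(log_lik + log_prior, axis=1)     # log p(r_s)
log_conditional = log_lik[np.arange(S), x_idx]            # log p(r_s | x_s)

# I(X; R) is approximated by the sample mean of log p(r|x) - log p(r).
mi_bits = np.mean(log_conditional - log_marginal) / np.log(2.0)
print(f"Monte Carlo estimate of I(X; R): {mi_bits:.3f} bits")
```

The estimate averages log p(r|x) - log p(r) over sampled stimulus-response pairs, which converges to the mutual information as the number of samples grows; a Monte Carlo estimate of this kind serves as the baseline against which approximation formulas can be compared.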

Notations and Definitions
Theorems
Approximations for Mutual Information
Results of Numerical Simulations
Discussion