Abstract

Maximizing the information conveyed per unit of expected energy cost in biological neurons is vital to understanding the functionality of our nervous system and to improving bio-inspired nano-networks. Hence, toward a better comprehension of neuronal information processing and communication from an information-energy standpoint, this paper presents the following novel results on the capacity-achieving distribution: 1) For the first time, the paper derives the probability mass function of the number of spikes observed in a given time window when the inter-spike intervals follow the Inverse Gaussian distribution, and extends this result by determining the first and second moments of that probability mass function. 2) The paper proves that, when the intensity parameter is a random variable, the relationship between its marginal distribution and the resulting spike-count probability mass function is one-to-one. 3) The paper proves that a unique capacity-achieving distribution of the stimulus intensity exists, that it is discrete with a finite number of mass points, and that this number varies with the energy budget. The paper then presents an algorithm that numerically computes the trade-off between the maximum number of transmitted bits and the energy expended over the coding duration. Subsequently, the optimal multiplier, the optimal distribution, and the corresponding capacity are obtained.
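
The spike-count distribution mentioned in result 1 can be illustrated with the standard renewal-counting identity P(N(T) = n) = F_n(T) - F_{n+1}(T), where F_n is the CDF of the sum of n inter-spike intervals, combined with the closure of the Inverse Gaussian family under i.i.d. summation. The sketch below evaluates this numerically; it is an independent illustration of that identity, not the paper's closed-form derivation, and the parameter values are arbitrary.

```python
# Numerical sketch of the spike-count PMF for Inverse Gaussian (IG)
# inter-spike intervals, using the renewal identity
#   P(N(T) = n) = F_n(T) - F_{n+1}(T),
# where F_n is the CDF of the sum of n i.i.d. IG intervals and the sum of
# n IG(mu, lam) variables is IG(n*mu, n^2*lam). Parameter values are
# illustrative, not taken from the paper.
import numpy as np
from scipy.stats import invgauss

mu, lam = 0.02, 0.08   # IG mean and shape (seconds), illustrative
T = 0.5                # observation window (seconds), illustrative

def spike_count_pmf(n, mu, lam, T):
    """P(N(T) = n) for a renewal process with IG(mu, lam) intervals.

    scipy's invgauss(mu=m/l, scale=l) is the IG law with mean m and shape l.
    """
    def F(k):
        if k == 0:
            return 1.0                      # an empty sum is <= T with probability 1
        m, l = k * mu, k ** 2 * lam
        return invgauss.cdf(T, mu=m / l, scale=l)
    return F(n) - F(n + 1)

ns = np.arange(0, 80)
pmf = np.array([spike_count_pmf(n, mu, lam, T) for n in ns])
print(f"total mass ~ {pmf.sum():.6f}")      # ~1 if the truncation at 80 is wide enough
print(f"E[N(T)]    ~ {np.sum(ns * pmf):.3f}")
print(f"E[N(T)^2]  ~ {np.sum(ns**2 * pmf):.3f}")
```

The first and second moments printed above can be cross-checked against a Monte Carlo simulation that draws IG intervals and counts how many fall inside the window.

For the capacity-energy trade-off, the paper's own algorithm is not reproduced here. The sketch below instead runs a generic cost-constrained Blahut-Arimoto iteration over a hypothetical discretized intensity alphabet, with an assumed mapping from intensity to the IG mean (mu = 1/intensity, fixed shape) and an assumed energy cost proportional to the expected spike count, purely to illustrate how sweeping a multiplier s trades maximum transmitted bits against energy expenditure.

```python
# Generic cost-constrained Blahut-Arimoto sweep over a discretized intensity
# alphabet. This is NOT the paper's algorithm; it only illustrates how a
# Lagrange-style multiplier s trades information rate against energy cost.
# The intensity-to-IG mapping (mu = 1/intensity, fixed shape lam) and the
# energy model (cost proportional to expected spike count) are assumptions.
import numpy as np
from scipy.stats import invgauss

lam, T = 0.08, 0.5                          # IG shape and window, illustrative

def ig_count_pmf(n, mu, lam, T):
    """P(N(T) = n) for IG(mu, lam) inter-spike intervals (renewal identity)."""
    def F(k):
        if k == 0:
            return 1.0
        m, l = k * mu, k ** 2 * lam
        return invgauss.cdf(T, mu=m / l, scale=l)
    return F(n) - F(n + 1)

def blahut_arimoto_cost(W, cost, s, n_iter=300):
    """Maximize I(X;Y) - s*E[cost(X)] over the input distribution p.

    W    : (K, M) channel matrix, W[k, m] = P(Y = m | X = k)
    cost : (K,) per-symbol energy cost
    s    : trade-off multiplier, s >= 0
    """
    K = W.shape[0]
    p = np.full(K, 1.0 / K)
    logW = np.log(np.where(W > 0, W, 1.0))  # convention: 0*log 0 = 0
    for _ in range(n_iter):
        qy = p @ W                           # current output marginal
        D = np.sum(W * (logW - np.log(np.where(qy > 0, qy, 1.0))), axis=1)
        w = p * np.exp(D - s * cost)         # Blahut update with cost penalty
        p = w / w.sum()
    qy = p @ W
    D = np.sum(W * (logW - np.log(np.where(qy > 0, qy, 1.0))), axis=1)
    return p, float(p @ D) / np.log(2), float(p @ cost)   # bits, mean cost

intensities = np.linspace(10.0, 100.0, 16)  # spikes/s, hypothetical alphabet
counts = np.arange(0, 150)
W = np.array([[ig_count_pmf(n, 1.0 / a, lam, T) for n in counts]
              for a in intensities])
W = np.clip(W, 0.0, None)
W /= W.sum(axis=1, keepdims=True)           # absorb truncation error
cost = intensities * T                      # assumed energy ~ expected spikes

for s in (0.0, 0.05, 0.2):                  # sweep the multiplier
    p_opt, bits, avg_cost = blahut_arimoto_cost(W, cost, s)
    print(f"s = {s:4.2f}:  I ~ {bits:.3f} bits,  E[cost] ~ {avg_cost:.2f}")
```

Sweeping s from 0 upward traces a curve of achievable bits versus average energy; the paper's contribution is the characterization of the exact discrete optimizer for the IG count channel, which this generic sketch does not claim to reproduce.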
