Abstract

We study the increase in the per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For LTI discrete-time filters and random processes, it has long been established by Theorem 14 in Shannon’s seminal paper that this entropy gain, $\mathcal{G}(G)$, equals the integral of $\log|G(e^{j\omega})|$. In this note, we first show that Shannon’s Theorem 14 does not hold in general. Then, we prove that, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals $\mathcal{G}(G)$. We show that the entropy gain between equal-length input and output sequences is upper bounded by $\mathcal{G}(G)$ and arises if and only if there exists an additive output disturbance with finite differential entropy (no matter how small) or a random initial state. Unlike what happens with linear maps, the entropy gain in this case depends on the distributions of all the signals involved. We illustrate some of the consequences of these results by presenting their implications in three different problems. Specifically: conditions for equality in an information inequality of importance in networked control problems; an extension, to a much broader class of sources, of the existing results on the rate-distortion function for non-stationary Gaussian sources; and an observation on the capacity of auto-regressive Gaussian channels with feedback.
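
For reference, the gain predicted by Shannon’s Theorem 14 can be written out explicitly. The following is a sketch in standard notation rather than an excerpt from the paper: the symbol $\mathcal{G}(G)$, the leading impulse-response coefficient $g_0$, and the zeros $\rho_i$ are assumed here, with the second equality given by Jensen’s formula for a causal, stable, rational $G$ whose poles lie strictly inside the unit circle:

\[
  \mathcal{G}(G) \;\triangleq\; \frac{1}{2\pi}\int_{-\pi}^{\pi} \log\left|G(e^{j\omega})\right| \, d\omega
  \;=\; \log|g_0| \;+\; \sum_{i:\,|\rho_i|>1} \log|\rho_i| ,
\]

so for a filter normalized to $g_0 = 1$, the predicted gain is exactly the sum of the log-magnitudes of its non-minimum-phase zeros.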

Highlights

  • Theorem 3 establishes that the effective differential entropy rate of the entire (complete) output of a linear time-invariant (LTI) system exceeds that of the input sequence by the right-hand side (RHS) of (6)

  • We present some of the implications of these results for three different problems previously addressed in the literature, namely finding the rate-distortion function for non-stationary processes, an inequality in networked control theory, and the feedback capacity of Gaussian stationary channels

  • We have provided an intuitive explanation and a rigorous characterization of the entropy gain of a linear time-invariant (LTI) system, defined as the difference between the differential entropy rates of its output and input random signals (see the numerical sketch below)
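
As a concrete companion to the last highlight, the following minimal sketch (our own illustration, assuming a Gaussian input; none of the code comes from the paper) shows why the equal-length gain cannot come from the filter alone: the n-by-n convolution matrix $G_n$ of a causal filter is lower-triangular Toeplitz, so $\det G_n = g_0^n$ and the per-sample gain is $\log|g_0|$ whether or not the zeros lie inside the unit circle.

import numpy as np

def conv_matrix(g, n):
    """n x n lower-triangular Toeplitz convolution matrix of impulse response g."""
    G = np.zeros((n, n))
    for k, gk in enumerate(g[:n]):
        G += gk * np.eye(n, k=-k)
    return G

n = 100
filters = {"MP (zero at 0.5)": [1.0, -0.5], "NMP (zero at 2.0)": [1.0, -2.0]}
for name, g in filters.items():
    G = conv_matrix(g, n)
    # For Gaussian u ~ N(0, I), h(G u) - h(u) = log|det G| = n log|g_0|,
    # since the determinant of a triangular matrix is the product of its diagonal.
    per_sample_gain = np.mean(np.log(np.abs(np.diag(G))))
    print(f"{name}: per-sample entropy gain = {per_sample_gain:.4f}")  # 0.0 for both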

Summary

Introduction

(It is worth noting that (6) is the discrete-time equivalent of (3) (without its wrong factor of 2), which follows directly from the correspondence between sampled band-limited continuous-time systems and discrete-time systems.) It is in Reference [9], Section II-C, where, for the first time, it is shown that, for a stationary Gaussian input $u_1^{\infty}$, the full entropy gain predicted by (6) takes place if the system output $y_1^{\infty}$ is contaminated by an additive output disturbance of length p and positive-definite covariance matrix, where p is the order of $G(z)$. A system with at least one zero outside the unit disk $\mathbb{D}$ is said to be non-minimum phase (NMP); a system with all its zeros inside $\mathbb{D}$ is said to be minimum phase (MP) [11].
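
The mechanism described above can be reproduced numerically. In the sketch below (our own construction, not code from Reference [9] or the paper), $G(z) = 1 - 2z^{-1}$ has $g_0 = 1$ and a single NMP zero at $\rho = 2$, so p = 1; injecting an arbitrarily small Gaussian disturbance on a single output sample drives the per-sample entropy gain toward $\log|\rho| = \log 2 \approx 0.693$, the full value predicted by (6), whereas the gain is $\log|g_0| = 0$ without the disturbance.

import numpy as np

# G(z) = 1 - 2 z^{-1}: g_0 = 1, one NMP zero at rho = 2.
# For u ~ N(0, I_n) and y = G_n u + z, with z Gaussian and supported on the
# first output sample only (variance 1e-9), the per-sample entropy gain is
#   (1/n) [h(y_1^n) - h(u_1^n)] = (1/(2n)) log det(G_n G_n^T + D).
for n in (50, 200, 800):
    G = np.eye(n) - 2.0 * np.eye(n, k=-1)   # lower-triangular convolution matrix
    D = np.zeros((n, n))
    D[0, 0] = 1e-9                          # "no matter how small" disturbance
    _, logdet = np.linalg.slogdet(G @ G.T + D)
    print(f"n = {n:4d}: gain = {0.5 * logdet / n:.4f}")  # -> log 2 = 0.6931... as n grows

With D = 0 the determinant is exactly 1 (zero gain) analytically, though evaluating it in floating point is delicate because $G_n G_n^T$ is extremely ill-conditioned; here the disturbance term dominates that roundoff by several orders of magnitude.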

Main Contributions of this Paper
Paper Outline
Notation
Mutual Information and Differential Entropy
Proof of Theorem 2
Formalizing Shannon’s Argument
The Effective Differential Entropy
Entropy-Balanced Processes
Geometric Interpretation
Characterization of Entropy-Balanced Processes
Entropy Gain Due to External Disturbances
Input Disturbances Do Not Produce Entropy Gain
The Entropy Gain Introduced by Output Disturbances when G is MP is Zero
Proof of Theorem 4
Entropy Gain Due to a Random Initial State
Some Implications
Networked Control
Rate-Distortion Function for Non-Stationary Processes
Conclusions
