Abstract

The Finite Transmission Feedback Information (FTFI) capacity is characterized for any class of channel conditional distributions ${\mathbf{P}}_{B_{i}|B^{i-1}, A_{i}}$ and ${\mathbf{P}}_{B_{i}|B_{i-M}^{i-1}, A_{i}}$, where $M$ is the memory of the channel, $B^{i} \stackrel {\triangle }{=}\{B_{-1},B_{0}, \ldots, B_{i}\}$ are the channel outputs, and $A^{i} \stackrel {\triangle }{=}\{A_{0}, A_{1}, \ldots, A_{i}\}$ are the channel inputs, for $i=0, \ldots, n$. The characterizations of the FTFI capacity are obtained by first identifying the information structures of the optimal channel input conditional distributions ${\mathscr P}_{[0, n]} \stackrel {\triangle }{=}\big \{ {\mathbf{P}}_{A_{i}|A^{i-1}, B^{i-1}}: i=0, \ldots, n\big \}$, which maximize the directed information $C_{A^{n} \rightarrow B^{n}}^{FB} \stackrel {\triangle }{=}\sup _{{\mathscr P}_{[0, n]} } I(A^{n} \rightarrow B^{n})$, where $I(A^{n} \rightarrow B^{n}) \stackrel {\triangle }{=}\sum _{i=0}^{n} I(A^{i};B_{i}|B^{i-1})$. The main theorem states that, for any channel with memory $M$, the optimal channel input conditional distributions lie in the subset satisfying the conditional independence $\stackrel {\circ }{\mathscr P}_{[0, n]} \stackrel {\triangle }{=}\big \{ {\mathbf{P}}_{A_{i}|A^{i-1}, B^{i-1}}= {\mathbf{P}}_{A_{i}|B_{i-M}^{i-1}}: i=0, \ldots, n\big \}$, and the characterization of the FTFI capacity is given by $C_{A^{n} \rightarrow B^{n}}^{FB, M} \stackrel {\triangle }{=}\sup _{ \stackrel {\circ }{\mathscr P}_{[0, n]} } \sum _{i=0}^{n} I(A_{i}; B_{i}|B_{i-M}^{i-1})$. Similar conclusions are derived for problems with average cost constraints of the form $\frac {1}{n+1} {\mathbf{E}}\big \{c_{0,n}(A^{n}, B^{n-1})\big \} \leq \kappa$, $\kappa \geq 0$, for specific functions $c_{0,n}(a^{n},b^{n-1})$. The feedback capacity is addressed by investigating the per-unit-time limit $\lim _{n \rightarrow \infty } \frac {1}{n+1}C_{A^{n} \rightarrow B^{n}}^{FB, M}$.
The methodology utilizes stochastic optimal control theory to identify the control process and the controlled process, and often a variational equality of directed information, in order to derive upper bounds on $I(A^{n} \rightarrow B^{n})$ that are achievable over specific subsets of channel input conditional distributions ${\mathscr P}_{[0, n]}$ characterized by conditional independence. The main results illustrate a direct analogy, in terms of conditional independence, between the characterizations of the FTFI capacity and Shannon's capacity formulae for memoryless channels. An example is presented to illustrate the role of the optimal channel input process in the derivations of the direct and converse coding theorems.
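The directed information functional $I(A^{n} \rightarrow B^{n}) = \sum_{i=0}^{n} I(A^{i};B_{i}|B^{i-1})$ and the restricted Markov input structure ${\mathbf{P}}_{A_{i}|B_{i-M}^{i-1}}$ can be made concrete numerically. The following sketch (not from the paper: the channel kernel `Q`, the input policy `P_A`, and the parameter `EPS` are all hypothetical choices) evaluates $I(A^{n} \rightarrow B^{n})$ by exhaustive enumeration over two time steps, for a toy binary channel with memory $M=1$ driven by an input policy of the restricted form $P_{A_{i}|B_{i-1}}$:

```python
import itertools
import math

# Hypothetical binary channel with memory M = 1: kernel Q(b | b_prev, a).
# A binary symmetric channel whose "good" symbol flips with the previous output.
EPS = 0.1

def Q(b, b_prev, a):
    if b_prev == 0:
        return 1 - EPS if b == a else EPS
    return 1 - EPS if b != a else EPS

# Markov input policy P(a | b_prev): per the structural result, distributions
# of the form P_{A_i | B_{i-1}} suffice when M = 1.  Here we simply take the
# uniform policy (a hypothetical choice, not the optimizer).
def P_A(a, b_prev):
    return 0.5

def joint(n, b_init=0):
    """Joint law of (A_0..A_{n-1}, B_0..B_{n-1}) induced by P_A and Q."""
    dist = {}
    for seq in itertools.product([0, 1], repeat=2 * n):
        a, b = seq[:n], seq[n:]
        pr, b_prev = 1.0, b_init
        for i in range(n):
            pr *= P_A(a[i], b_prev) * Q(b[i], b_prev, a[i])
            b_prev = b[i]
        dist[(a, b)] = pr
    return dist

def directed_info(dist, n):
    """I(A^n -> B^n) = sum_i I(A^i; B_i | B^{i-1}), by enumeration (bits)."""
    total = 0.0
    for i in range(n):
        p_ab, p_b, p_abm, p_bm = {}, {}, {}, {}
        for (a, b), pr in dist.items():           # marginal p(a^i, b^i)
            k = (a[:i + 1], b[:i + 1])
            p_ab[k] = p_ab.get(k, 0.0) + pr
        for (a, b), pr in p_ab.items():           # further marginals
            p_b[b] = p_b.get(b, 0.0) + pr
            p_abm[(a, b[:i])] = p_abm.get((a, b[:i]), 0.0) + pr
            p_bm[b[:i]] = p_bm.get(b[:i], 0.0) + pr
        for (a, b), pr in p_ab.items():
            if pr > 0.0:
                num = pr / p_abm[(a, b[:i])]      # p(b_i | a^i, b^{i-1})
                den = p_b[b] / p_bm[b[:i]]        # p(b_i | b^{i-1})
                total += pr * math.log2(num / den)
    return total

n = 2
di = directed_info(joint(n), n)
print(di)  # two summands, each 1 - H2(EPS) bits for this symmetric example
```

For this symmetric example each of the two summands equals $1 - H_{2}(0.1) \approx 0.531$ bits. Maximizing this quantity over the Markov policies $P_{A_{i}|B_{i-1}}$, rather than over all of ${\mathscr P}_{[0, n]}$, would yield the FTFI characterization $C_{A^{n} \rightarrow B^{n}}^{FB, M}$ in the sense of the main theorem.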


