Abstract

We consider the framework of the transfer-entropy-regularized Markov decision process (TERMDP), in which the weighted sum of the classical state-dependent cost and the transfer entropy from the state random process to the control input process is minimized. Although TERMDPs are generally formulated as nonconvex optimization problems, an analytical necessary optimality condition can be expressed as a finite set of nonlinear equations, based on which an iterative forward–backward computational procedure similar to the Arimoto–Blahut algorithm is developed. It is shown that every limit point of the sequence generated by the proposed algorithm is a stationary point of the TERMDP. Applications of TERMDPs are discussed in the context of networked control systems theory and nonequilibrium thermodynamics. The proposed algorithm is applied to an information-constrained maze navigation problem, whereby we study how the price of information qualitatively alters the optimal decision policies.
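To illustrate the flavor of the Arimoto–Blahut-style iteration described above, the following is a minimal sketch of a one-step simplification: instead of the full transfer-entropy term over a state process, we penalize the single-stage mutual information I(X; U) between state and control, minimizing E[c(x, u)] + β·I(X; U) over the policy q(u|x) by alternating updates. All function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def blahut_arimoto_policy(p_x, cost, beta, iters=200):
    """One-step, information-regularized policy via alternating updates.

    p_x:  (nx,) state distribution
    cost: (nx, nu) state-dependent cost matrix
    beta: price of information (weight on I(X; U))
    """
    nx, nu = cost.shape
    q = np.full((nx, nu), 1.0 / nu)        # policy q(u|x), start uniform
    for _ in range(iters):
        r = p_x @ q                         # marginal r(u) = sum_x p(x) q(u|x)
        q = r * np.exp(-cost / beta)        # tilt the marginal by the cost
        q /= q.sum(axis=1, keepdims=True)   # renormalize each row
    return q

p_x = np.array([0.5, 0.5])
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])              # each state has one cheap action

# Large beta: information is expensive, so the policy stays near uniform
# (it barely depends on the state).
q_hi = blahut_arimoto_policy(p_x, cost, beta=10.0)

# Small beta: information is cheap, so the policy concentrates on the
# cheap action in each state.
q_lo = blahut_arimoto_policy(p_x, cost, beta=0.1)
```

This is the same alternating structure as the classical Blahut–Arimoto algorithm for rate-distortion problems; the paper's forward–backward procedure extends this idea to the sequential, transfer-entropy case.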
