Abstract

This paper investigates the optimal transmission strategy for remote state estimation over multiple Markovian fading channels. A smart sensor obtains a local state estimate of the system and transmits it to a remote estimator. A new transmission strategy is proposed by co-designing the channel allocation and the transmission power control. The co-design problem is modeled as a constrained Markov decision process (CMDP) to minimize the expected average estimation error covariance subject to an energy constraint over an infinite horizon. The CMDP is then relaxed into an unconstrained Markov decision process (UMDP) using the Lagrange multiplier method. Sufficient conditions for the existence of an optimal stationary policy for the UMDP are established to obtain the optimal transmission strategy. The structure of the optimal transmission power control policy for the UMDP with discounted cost is also elucidated. To account for the discrete-continuous hybrid action space, a parameterized deep Q-network (P-DQN) algorithm is employed to obtain an approximately optimal policy for the UMDP. Finally, a moving vehicle example is introduced to illustrate the effectiveness of the developed methods.
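To illustrate how a P-DQN handles the hybrid action of this problem, the following is a minimal PyTorch sketch of greedy action selection over a (channel index, transmit power) pair. The state dimension, network sizes, number of channels, and power bound are illustrative assumptions, not values from the paper; the full algorithm would also include Bellman updates for the Q-network, actor updates, exploration, and the Lagrangian handling of the energy constraint, all of which are omitted here.

```python
# Minimal P-DQN-style action selection for a hybrid action (channel k, power p).
# All dimensions and bounds below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4      # assumed state dimension (e.g., holding time + channel states)
NUM_CHANNELS = 3   # assumed number of Markovian fading channels
P_MAX = 1.0        # assumed transmit-power upper bound

class ParamActor(nn.Module):
    """Maps a state to one continuous power level per discrete channel choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_CHANNELS), nn.Sigmoid(),  # power scaled to (0, P_MAX)
        )

    def forward(self, s):
        return P_MAX * self.net(s)            # shape: (batch, NUM_CHANNELS)

class QNetwork(nn.Module):
    """Evaluates Q(s, k, x_k): one Q-value per channel, given all candidate powers."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_CHANNELS, 64), nn.ReLU(),
            nn.Linear(64, NUM_CHANNELS),
        )

    def forward(self, s, powers):
        return self.net(torch.cat([s, powers], dim=-1))   # (batch, NUM_CHANNELS)

def select_action(actor, qnet, s):
    """Greedy hybrid action: the channel with the highest Q-value,
    paired with the power its actor head proposes for that channel."""
    with torch.no_grad():
        powers = actor(s)                        # candidate power for each channel
        q = qnet(s, powers)                      # Q-value of each (channel, power) pair
        k = torch.argmax(q, dim=-1)              # discrete part: channel index
        p = powers.gather(-1, k.unsqueeze(-1))   # continuous part: matching power
    return k, p

if __name__ == "__main__":
    actor, qnet = ParamActor(), QNetwork()
    state = torch.zeros(1, STATE_DIM)            # placeholder state
    channel, power = select_action(actor, qnet, state)
    print(channel.item(), power.item())
```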
