Abstract
This paper studies a wireless-powered sensor network in which a sensor harvests energy from a dedicated radio-frequency (RF) energy source and transmits information to an information sink using the harvested energy. Two working modes are considered. One is the frequency division multiplexing (FDM) mode, in which the sensor harvests RF energy and transmits information simultaneously over orthogonal frequency bands. The other is the time division multiplexing (TDM) mode, in which energy harvesting and information transmission share the same frequency band but occupy different time slots. The energy harvesting channel and the information transmission channel are assumed to follow Rician and Rayleigh distributions, respectively, and are discretized and modeled as finite-state Markov chains. We formulate the joint process of energy harvesting and information transmission as an infinite-horizon discounted Markov decision process (MDP). The value iteration algorithm is used to find an asymptotically optimal energy harvesting and information transmission policy that maximizes the long-term throughput. In the asymptotically optimal policy of the FDM mode, the energy transmitted by the sensor in one slot is proved to be non-decreasing in the sensor's battery state. By contrast, no such monotonicity between the transmitted energy and the battery state exists in the asymptotically optimal policy of the TDM mode. Simulation results verify these findings and demonstrate that the proposed method outperforms the heuristic greedy method.
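To make the solution approach concrete, the sketch below runs value iteration on a toy infinite-horizon discounted MDP whose state couples a discretized battery level with a finite-state channel, in the spirit of the formulation above. This is a minimal illustration only: the state-space sizes, transition probabilities, rewards, and names (N_BATTERY, N_CHANNEL, GAMMA) are assumptions for demonstration, not quantities from the paper.

```python
import numpy as np

# Minimal value-iteration sketch for an infinite-horizon discounted MDP.
# The state combines the sensor's battery level with a discretized channel
# state, and an action is the amount of energy the sensor spends on
# transmission in one slot. All quantities below are toy placeholders.

N_BATTERY = 5          # discretized battery levels 0..4 (illustrative)
N_CHANNEL = 3          # finite-state Markov channel states (illustrative)
GAMMA = 0.9            # discount factor

n_states = N_BATTERY * N_CHANNEL
n_actions = N_BATTERY  # spend 0..(N_BATTERY - 1) units of energy

rng = np.random.default_rng(0)

# P[a, s, s']: transition probabilities; R[s, a]: per-slot throughput reward.
# Both are random placeholders here; in the paper they would come from the
# battery dynamics and the Rician/Rayleigh channel Markov chains.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

# Infeasible actions (spending more energy than the battery holds) get -inf
# reward so value iteration never selects them.
for s in range(n_states):
    battery = s // N_CHANNEL
    R[s, battery + 1:] = -np.inf

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality backup: Q[s,a] = R[s,a] + gamma * sum_s' P[a,s,s'] V[s']
    Q = R + GAMMA * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # energy to transmit in each (battery, channel) state
print("per-state transmit energy:", policy)
```

The same Bellman backup applies to both the FDM and TDM formulations once their transition kernels and reward functions are specified; only P and R change between the two modes.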