Abstract

Wireless communication with nodes capable of harvesting energy is emerging as a new technological challenge. In this paper, we investigate the problem of utilizing energy cooperation among energy-harvesting transmitters to maximize data rate performance. We consider a general framework that applies either to cellular networks with base station energy cooperation through the wired power grid or to sensor networks with transmitter energy cooperation through wireless power transfer. We model this energy cooperation problem as an infinite-horizon Markov decision process (MDP), which can be solved optimally by the value iteration algorithm. Since the optimal value iteration algorithm has high complexity and requires non-causal information, we propose a distributed algorithm that uses reinforcement learning and splits the MDP into several smaller MDPs, each associated with one transmitter. Simulation results demonstrate the effectiveness of the proposed distributed energy cooperation algorithm.

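To make the solution approach mentioned in the abstract concrete, the following is a minimal sketch of value iteration for a discounted infinite-horizon MDP. The toy transition tensor `P`, reward matrix `R`, discount factor `gamma`, and state/action sizes are illustrative assumptions and do not represent the paper's energy-cooperation model.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Value iteration for a discounted infinite-horizon MDP.

    P: transition probabilities, shape (S, A, S), P[s, a, s'] = Pr(s' | s, a)
    R: immediate rewards, shape (S, A)
    Returns the optimal value function and a greedy (optimal) policy.
    """
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * sum_s' P(s, a, s') V(s')
        Q = R + gamma * np.einsum('sat,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy example with 2 states and 2 actions (made-up dynamics and rewards).
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V_opt, policy = value_iteration(P, R)
print(V_opt, policy)
```

The proposed distributed algorithm in the paper instead learns from experience (reinforcement learning) on per-transmitter sub-MDPs, precisely because a centralized backup like the one above requires the full model and non-causal information.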