Abstract

In this paper, we propose distributed static and dynamic optimal policies for a random access environment composed of energy harvesting (EH) nodes, with the goal of maximizing the sum throughput. In the static approach, each EH node transmits its packets at an optimal constant power, whereas in the dynamic approach the EH nodes adjust their transmission powers based on their network information and thus employ variable transmission powers. In the static algorithm, the maximization is carried out by modeling the energy buffer of the EH nodes as a two-dimensional discrete-time Markov chain that captures the effects of online charging and a limited energy buffer. In the dynamic approach, variable power is allotted to the EH nodes by modeling the problem as a Markov decision process. We observe that the dynamic approach outperforms the static one through better management of collisions and available energy. Simulation results confirm our analytical approach.
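The dynamic approach described above can be sketched as a standard value iteration over a Markov decision process whose state is a node's battery level. This is only an illustrative toy model under assumed parameters: the buffer size `B`, harvest probability `P_HARVEST`, discount factor, action set, and the `success_prob` contention model are all hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the dynamic (MDP-based) power policy.
# State: battery level b in {0, ..., B} energy quanta.
# Action: transmit power a (in quanta), feasible only if a <= b.
# All numeric parameters below are illustrative assumptions.

B = 5             # energy buffer capacity (quanta)
P_HARVEST = 0.4   # probability one energy quantum arrives per slot
GAMMA = 0.95      # discount factor
ACTIONS = [0, 1, 2]  # candidate transmit powers (0 = stay idle)

def success_prob(power):
    # Toy throughput model: higher power -> higher chance the packet
    # gets through; a real model would capture collisions explicitly.
    return {0: 0.0, 1: 0.5, 2: 0.8}[power]

def value_iteration(tol=1e-6):
    V = [0.0] * (B + 1)
    while True:
        V_new = []
        for b in range(B + 1):
            best = 0.0
            for a in ACTIONS:
                if a > b:  # cannot spend energy the buffer lacks
                    continue
                q = success_prob(a)
                # Battery transition: spend a, maybe harvest one quantum,
                # clip at the (limited) buffer capacity B.
                for harvest, p in ((1, P_HARVEST), (0, 1 - P_HARVEST)):
                    nb = min(b - a + harvest, B)
                    q += GAMMA * p * V[nb]
                best = max(best, q)
            V_new.append(best)
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new
```

The resulting value function is nondecreasing in the battery level, so the induced policy naturally transmits at higher power only when enough harvested energy is available.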
