Abstract

We address the problem of cooperative retransmission at the media access control (MAC) layer of a distributed wireless network with spatial reuse, where multiple concurrent transmissions from source and relay nodes can take place. We propose a novel Markov decision process (MDP) framework for adjusting the transmission powers and transmission probabilities at the source and relay nodes to maximize the network throughput per unit of consumed energy. We also propose distributed methods that avoid solving a centralized MDP model with a large number of states by employing model-free reinforcement learning (RL) algorithms. We show convergence to a local solution and derive a lower bound on the performance of the proposed RL algorithms. We further confirm empirically that the proposed learning schemes are robust to collisions, scale with network size, and provide significant cooperative diversity gains while maintaining low complexity and fast convergence.
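
To make the idea of per-node, model-free adaptation concrete, the sketch below shows one simplified way a node could learn a (transmission power, transmission probability) policy with tabular Q-learning against a bits-per-joule reward. All names, discretizations, and the reward definition here are illustrative assumptions, not the paper's exact state space, action set, or algorithm.

```python
import random
from collections import defaultdict

# Illustrative sketch only: a per-node tabular Q-learning agent that picks a
# (transmit power, transmission probability) pair each slot and learns from a
# throughput-per-energy reward, in the spirit of the distributed RL approach
# described in the abstract. Discretizations and reward are assumptions.

POWER_LEVELS = [0.5, 1.0, 2.0]    # assumed discrete transmit power levels (W)
TX_PROBS = [0.2, 0.5, 0.8]        # assumed discrete transmission probabilities
ACTIONS = [(p, q) for p in POWER_LEVELS for q in TX_PROBS]


class NodeAgent:
    """One learning agent per source/relay node (hypothetical interface)."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # Q-values keyed by (state, action)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration over the joint (power, probability) actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def bits_per_joule(bits_delivered, power, slot_duration=1.0):
    # Assumed reward: throughput per unit of consumed energy, matching the
    # objective stated in the abstract; a silent node consumes no energy.
    energy = power * slot_duration
    return bits_delivered / energy if energy > 0 else 0.0
```

In a simulation loop, each node would observe a local state (for example, its queue and whether the last packet was acknowledged), call `act`, transmit with the chosen probability and power, and then call `update` with the `bits_per_joule` reward; because each node learns only from its own observations, no centralized MDP needs to be solved.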
