Abstract
High-density communications in wireless sensor networks (WSNs) demand new approaches to meet stringent energy and spectrum requirements. We turn to reinforcement learning, a prominent method in artificial intelligence, to design an energy-preserving MAC protocol with the aim of extending the network lifetime. Our QL-MAC protocol is derived from Q-learning, which iteratively tweaks the MAC parameters through a trial-and-error process to converge to a low-energy state. This has the dual benefit of 1) solving this minimization problem without the need to predetermine the system model and 2) providing a protocol that self-adapts to topological and other external changes. QL-MAC self-adjusts the WSN node duty cycle, reducing energy consumption without detrimental effects on the other network parameters. This is achieved by adjusting the radio sleeping and active periods based on traffic predictions and the transmission state of neighboring nodes. Our findings are corroborated by an extensive set of experiments carried out on off-the-shelf devices, alongside large-scale simulations.
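The trial-and-error adaptation described above can be illustrated with a minimal tabular Q-learning loop. This is a sketch only: the state space (coarse traffic levels), action space (candidate duty cycles), and reward (a toy energy-cost model penalizing both idle listening and missed traffic) are our illustrative assumptions, not the actual QL-MAC formulation.

```python
import random

# Illustrative assumptions, not the paper's design:
duty_cycles = [0.1, 0.25, 0.5, 1.0]    # candidate active-period fractions (actions)
states = range(4)                       # coarse traffic-load levels (states)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Tabular Q-values, initialized to zero.
Q = {(s, a): 0.0 for s in states for a in range(len(duty_cycles))}

def energy_cost(state, action):
    """Toy energy model: a large duty cycle wastes energy on idle listening
    at low traffic, while a small one misses packets at high traffic."""
    dc = duty_cycles[action]
    idle_waste = dc * (3 - state)            # radio on with little to receive
    missed = max(0.0, state / 3 - dc) * 5.0  # penalty for sleeping through traffic
    return idle_waste + missed

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.randrange(len(duty_cycles))   # explore
    return max(range(len(duty_cycles)), key=lambda a: Q[(state, a)])  # exploit

random.seed(0)
state = 0
for _ in range(5000):
    action = choose_action(state)
    reward = -energy_cost(state, action)         # lower energy -> higher reward
    next_state = random.choice(list(states))     # traffic evolves exogenously here
    best_next = max(Q[(next_state, a)] for a in range(len(duty_cycles)))
    # Standard Q-learning update.
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# At low traffic (state 0), the learned policy should favor a small duty cycle.
best = max(range(len(duty_cycles)), key=lambda a: Q[(0, a)])
print("preferred duty cycle at low traffic:", duty_cycles[best])
```

Under this toy reward, the node learns to shorten its active period when traffic is light, mirroring the self-adjusting duty cycle described in the abstract; on a real node, the reward would instead be driven by measured energy draw and neighbor transmission state.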