Abstract

This paper develops a learning-based framework for MAC sleep–listen–transmit scheduling in wireless networks. The reinforcement learning-based paradigm is shown to work in the absence of network time synchronization and complex hardware features such as carrier sensing, making it suitable for low-cost transceivers in IoT and wireless sensor nodes. The framework allows wireless nodes to learn policies that support throughput-sustainable flows while minimizing node energy expenditure and sleep-induced packet drops and delays. Each node independently learns a scheduling policy without explicit communication with other network nodes. The trade-off between packet drops and energy efficiency is analyzed, and an application-specific solution for handling it is proposed; the model lets users prioritize energy efficiency or packet drops according to specific application requirements. An analytical model is developed to capture the underlying system dynamics and is validated through extensive simulation experiments. Finally, the mechanism is evaluated across heterogeneous network topologies and traffic patterns.
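
To make the abstract's description concrete, the following is a minimal sketch of the kind of independent per-node learner it describes. The abstract does not specify the algorithm, state space, or reward function; the tabular Q-learning formulation, the queue-level state, and the energy_weight knob below are illustrative assumptions, not the paper's actual design.

```python
import random

# Hypothetical sketch: assumes tabular Q-learning over a coarse queue-occupancy
# state, with the three MAC actions named in the abstract. The real paper's
# state, action, and reward definitions may differ.
ACTIONS = ["sleep", "listen", "transmit"]

class NodeScheduler:
    """Per-node scheduler: learns independently, no inter-node messages."""

    def __init__(self, n_queue_levels=4, alpha=0.1, gamma=0.9, epsilon=0.1,
                 energy_weight=0.5):
        # energy_weight is an assumed application-specific knob trading
        # energy expenditure against sleep-induced packet drops.
        self.q = {(s, a): 0.0 for s in range(n_queue_levels) for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.energy_weight = energy_weight

    def choose_action(self, state):
        # Epsilon-greedy exploration over the learned action values.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def reward(self, energy_spent, packets_dropped):
        # Weighted penalty: battery-constrained applications raise
        # energy_weight; drop-sensitive applications lower it.
        w = self.energy_weight
        return -(w * energy_spent + (1.0 - w) * packets_dropped)

    def update(self, state, action, r, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        self.q[(state, action)] += self.alpha * (
            r + self.gamma * best_next - self.q[(state, action)])
```

Because the reward folds both objectives into a single weighted penalty, retuning energy_weight is one plausible way a user could shift a node's policy toward longer sleep intervals or toward fewer drops, matching the application-specific prioritization the abstract mentions.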
