Abstract

The potential applications of deep learning (DL) to the media access control (MAC) layer of wireless local area networks (WLANs) have increasingly been acknowledged, owing to the novel capabilities DL offers for future communications. These capabilities challenge conventional communication theory with more sophisticated, artificial intelligence-based approaches. Deep reinforcement learning (DRL) is one such DL technique, motivated by behaviorist psychology and control theory, in which a learner achieves an objective by interacting with its environment. Next-generation dense WLANs, such as the IEEE 802.11ax high-efficiency WLAN, are expected to confront ultra-dense, diverse user environments and radically new applications. To satisfy these diverse requirements, prospective WLANs are expected to access the best channel resources autonomously, assisted by self-scrutinized inference of wireless channel conditions. Channel collision handling is one of the major obstacles for future WLANs due to increasing user density. Therefore, in this paper, we propose DRL as an intelligent paradigm for MAC layer resource allocation in dense WLANs. One DRL model, Q-learning (QL), is used to optimize the performance of channel observation-based MAC protocols in dense WLANs. An intelligent QL-based resource allocation (iQRA) mechanism is proposed for MAC layer channel access in dense WLANs. The performance of the proposed iQRA mechanism is evaluated through extensive simulations in diverse WLAN environments, with throughput, channel access delay, and fairness as performance metrics. Simulation results indicate that the proposed intelligent paradigm learns diverse WLAN environments and optimizes performance compared to conventional non-intelligent MAC protocols.
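For orientation, the sketch below illustrates the standard tabular Q-learning update that underlies QL-based adaptation of channel access. The state and action definitions (quantized channel-observation levels, contention-window sizes) and the reward signal are illustrative assumptions for this sketch, not the paper's exact iQRA design.

```python
# Minimal sketch of tabular Q-learning driving channel-access adaptation.
# States, actions, and reward below are hypothetical placeholders.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate
ACTIONS = [16, 32, 64, 128, 256]           # hypothetical contention-window choices

Q = defaultdict(float)                     # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over contention-window sizes."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def env_step(state, action):
    """Environment stub standing in for channel observation and a throughput-based reward."""
    reward = random.random() - 0.1 * state     # placeholder reward signal
    next_state = random.randint(0, 4)          # placeholder next observation level
    return reward, next_state

state = 0
for _ in range(1000):
    action = choose_action(state)
    reward, next_state = env_step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

In a real MAC-layer deployment, the environment stub would be replaced by observed channel conditions (e.g., collision probability) and a reward derived from achieved throughput or access delay.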
