Abstract

Mobile ad hoc networks (MANETs) consist of self-configuring mobile wireless nodes that communicate with each other over the radio medium without any fixed infrastructure or centralized administration. The underlying wireless technology is based on the IEEE 802.11 standard. The IEEE 802.11 Distributed Coordination Function (DCF) MAC layer uses the Binary Exponential Backoff (BEB) algorithm to deal with collisions in wireless networks. BEB is effective in reducing the probability of collisions, but at the expense of several network performance measures, such as throughput and packet delivery ratio, especially under high traffic loads. Deep Reinforcement Learning (DRL) combines deep learning with reinforcement learning, allowing an agent to achieve a goal by interacting with its environment. In this paper, we propose a Q-learning (QL)-based mechanism to optimize the performance of contention-window (CW)-based MAC protocols in MANETs. The proposed intelligent mechanism, MISQ, takes into account the number of packets to be transmitted and the collisions experienced by each station to select the appropriate contention window. The performance of the proposed mechanism is evaluated through extensive simulations. The results indicate that MISQ learns various MANET environments and improves performance over the standard MAC protocol. MISQ is evaluated in various network settings using throughput, channel access delay, and packet delivery ratio as performance measures.
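For context, the BEB behavior the abstract refers to can be sketched as follows. This is a minimal illustration of the standard DCF rule (double the CW on collision, reset on success); the CW bounds shown are common 802.11 DCF values, but the exact values depend on the PHY in use.

```python
import random

# Common IEEE 802.11 DCF contention-window bounds (PHY-dependent).
CW_MIN = 15
CW_MAX = 1023

def next_cw(cw: int, collided: bool) -> int:
    """Binary Exponential Backoff: on collision, double the contention
    window (capped at CW_MAX); on successful transmission, reset to CW_MIN."""
    if collided:
        return min(2 * (cw + 1) - 1, CW_MAX)
    return CW_MIN

def backoff_slots(cw: int) -> int:
    """Draw a uniform random backoff counter in [0, cw], as in DCF."""
    return random.randint(0, cw)
```

Because every collision doubles the CW, heavily loaded stations spend progressively longer deferring, which is the throughput and delivery-ratio cost the abstract mentions.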
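A mechanism in the spirit of MISQ can be sketched with tabular Q-learning, where the state encodes the station's queued-packet count and collision count, and each action picks a candidate CW value. The state encoding, candidate CW set, reward, and hyperparameters below are illustrative assumptions, not the paper's actual design.

```python
import random
from collections import defaultdict

# Candidate contention windows the agent can choose from (assumed set).
CW_CHOICES = [15, 31, 63, 127, 255, 511, 1023]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

class CWAgent:
    """Tabular Q-learning agent selecting a contention window per station.
    A state might be (queued-packet bucket, collision bucket)."""

    def __init__(self):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value

    def select(self, state):
        """Epsilon-greedy choice over candidate CW indices."""
        if random.random() < EPSILON:
            return random.randrange(len(CW_CHOICES))
        return max(range(len(CW_CHOICES)), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update:
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        best_next = max(self.q[(next_state, a)] for a in range(len(CW_CHOICES)))
        td_error = reward + GAMMA * best_next - self.q[(state, action)]
        self.q[(state, action)] += ALPHA * td_error
```

In a simulation loop, the reward could favor successful deliveries and penalize collisions and access delay, so the agent learns to pick larger CWs under contention and smaller ones when the channel is lightly loaded.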
