Abstract

The IPv6 over Low-power Wireless Personal Area Network (6LoWPAN) protocol stack is a promising solution for connecting Wireless Sensor Networks (WSNs) to the Internet and realizing a ubiquitous interconnection of all things. However, 6LoWPAN networks face a critical challenge in controlling the congestion caused by bursts of data traffic from wireless sensors: packet loss occurs when buffers overflow. This paper focuses on the loss-tolerant congestion control problem in 6LoWPAN networks, which has not been addressed in existing work. We formulate the congestion control problem as a non-cooperative Markov game and conceive a novel congestion control method, namely Deep reinforcement learning aided Loss-tolerant Congestion Control (DLCC), to alleviate congestion while keeping the packet loss imposed by buffer overflow within a tolerable bound. DLCC employs Deep Reinforcement Learning (DRL) to cope with the curse of dimensionality in the state space, and handles the packet loss constraints by using Lagrange multipliers to integrate the loss constraints into the reward. By dynamically updating the Lagrange multipliers in an online learning procedure, DLCC finds the optimal congestion control policy. Our simulation results show that DLCC keeps the packet loss rate below the tolerable threshold in the presence of congestion. Compared with existing hybrid congestion control algorithms, DLCC is more energy-efficient and provides higher throughput, lower average delay, and better fairness.
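
The Lagrangian treatment described above follows the standard pattern of constrained reinforcement learning; a minimal single-agent sketch is given below, suppressing the multi-agent game structure. The discounted objective, the per-step loss cost c_t, the loss budget \bar{C}, and the step size \eta are illustrative assumptions, not notation taken from the paper:

\[
\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t} r_{t}\Big]
\quad\text{s.t.}\quad
\mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t} c_{t}\Big] \le \bar{C},
\]

\[
L(\pi,\lambda) \;=\; \mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t}\big(r_{t}-\lambda\,c_{t}\big)\Big] + \lambda\,\bar{C},
\qquad
\lambda \;\leftarrow\; \max\!\big(0,\ \lambda + \eta\,(\hat{C}_{\pi}-\bar{C})\big).
\]

Under this formulation, the DRL agent maximizes the shaped reward r_t - \lambda c_t, while the dual update raises \lambda whenever the estimated loss \hat{C}_{\pi} exceeds the budget, driving the learned policy toward the tolerable packet-loss region.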
