Abstract

The Internet of Things (IoT) finds applications in home, city, and industrial settings. Current networks are transitioning to a fog/edge architecture to provide the capacity that IoT requires. However, to handle the enormous volume of traffic generated by IoT devices and to reduce queuing delay, novel self-learning network management algorithms are needed at fog/edge nodes. Active Queue Management (AQM) is a well-known intelligent packet-dropping technique for differentiated QoS. In this paper, we propose a new AQM scheme based on Deep Reinforcement Learning (DRL) and introduce a scaling factor into our reward function to achieve a trade-off between queuing delay and throughput. We choose Deep Q-Network (DQN) as the baseline for our scheme and compare our approach with various AQM schemes by deploying them at the interface of a fog/edge node. We simulate them under different bandwidth and round-trip time (RTT) configurations. The simulation results show that our scheme outperforms the other AQM schemes in terms of delay and jitter while maintaining above-average throughput, and also verify that DRL-based AQM is effective in managing congestion.
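The abstract does not give the reward function explicitly; the sketch below only illustrates how a scaling factor could trade off queuing delay against throughput, assuming a simple linear penalty form. The function name, signature, and normalization are hypothetical, not taken from the paper.

```python
def aqm_reward(throughput, queuing_delay, w=0.5):
    """Hypothetical DRL reward for an AQM agent (assumed linear form).

    throughput    -- achieved throughput, normalized to [0, 1]
    queuing_delay -- observed queuing delay, normalized to [0, 1]
    w             -- scaling factor: larger w favors low delay over throughput
    """
    return throughput - w * queuing_delay


# A larger scaling factor penalizes the same delay more heavily,
# steering the agent toward earlier packet drops.
low_w = aqm_reward(0.8, 0.4, w=0.25)   # delay weighted lightly
high_w = aqm_reward(0.8, 0.4, w=1.0)   # delay weighted heavily
```

Under this assumed form, tuning `w` moves the operating point along the delay-throughput trade-off curve that the abstract describes.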
