Active Queue Management (AQM) is used in TCP/IP networks to meet congestion-control targets of low delay and high throughput. However, classical AQM methods such as Random Early Detection (RED) operate strictly on pre-set control parameters and therefore may respond poorly to changing network conditions. This paper presents a composite AQM model based on deep reinforcement learning (DRL) with a Deep Q-Network (DQN) that learns the queue-weight parameter of the AQM scheme autonomously. The adaptiveness of DRL underpins the effectiveness of the proposed system: whereas traditional model-based approaches lose stability and performance when tested under varying network conditions, DRL's ability to learn independently allows the controller to respond to network congestion as it occurs. The result is greater stability, lower delay, higher bandwidth utilization, and a lower overall packet drop rate, making the proposed DRL-RED model well suited to dynamic network environments. DRL-RED is compared with the standard RED algorithm in both low-density and high-density networks, sustaining throughput of up to 49.9 Mbps with a 0.949% reduction in delay and a very low packet loss rate (PLR) of 8.38043%. The comparative analysis makes clear that combining DRL with RED improves network throughput and reduces both packet drop frequency and overall delay across network scenarios. The traditional approach has two main disadvantages: first, it cannot cope with heavy traffic, a problem only partly solved by model-based approaches; second, its congestion-control parameters cannot be re-tuned, an issue eliminated by DRL as a result of its inherent adaptiveness. Consequently, renewed stability and increased performance can be achieved under various network conditions.
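To make the idea concrete, the sketch below illustrates the abstract's core mechanism: an agent that learns RED's queue-weight parameter (the EWMA coefficient w_q) from observed queue behaviour instead of keeping it fixed. The paper uses a DQN; this minimal stand-in uses tabular Q-learning over a toy traffic model, and every name here (the candidate weights, the reward shaping, the burst pattern) is an illustrative assumption, not the authors' implementation.

```python
import random

W_CHOICES = [0.002, 0.02, 0.2]       # candidate RED weight parameters (the agent's actions)
MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1   # classic RED thresholds (assumed values)
N_STATES = 4                          # coarse bins of the average queue length
QUEUE_CAP = 20                        # toy buffer size in packets

def red_drop_prob(avg):
    """Standard RED drop probability as a function of the average queue length."""
    if avg < MIN_TH:
        return 0.0
    if avg >= MAX_TH:
        return 1.0
    return MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)

def state_of(avg):
    """Discretize the average queue length into one of N_STATES bins."""
    return min(N_STATES - 1, int(avg / QUEUE_CAP * N_STATES))

def step(queue, avg, w_q, rng):
    """One toy time step: bursty arrivals, fixed service rate, RED dropping."""
    arrivals = rng.choice([0, 1, 1, 2, 5])    # occasional burst of 5 packets
    for _ in range(arrivals):
        if rng.random() >= red_drop_prob(avg):
            queue = min(queue + 1, QUEUE_CAP)
    queue = max(queue - 1, 0)                 # serve one packet
    avg = (1 - w_q) * avg + w_q * queue       # RED's EWMA with the learned w_q
    reward = -abs(queue - 8)                  # assumed reward: moderate queue = low delay, busy link
    return queue, avg, reward

def train(episodes=300, alpha=0.3, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over (queue state, w_q choice) pairs."""
    rng = random.Random(seed)
    q = [[0.0] * len(W_CHOICES) for _ in range(N_STATES)]
    for _ in range(episodes):
        queue, avg = 0, 0.0
        for _ in range(50):
            s = state_of(avg)
            if rng.random() < eps:            # epsilon-greedy exploration
                a = rng.randrange(len(W_CHOICES))
            else:
                a = max(range(len(W_CHOICES)), key=lambda i: q[s][i])
            queue, avg, r = step(queue, avg, W_CHOICES[a], rng)
            s2 = state_of(avg)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q

q_table = train()
```

The learned table maps each coarse queue state to a preferred w_q, which is the adaptivity the abstract claims: a small w_q smooths out bursts, a large one reacts quickly to sustained congestion, and the agent picks between them per state rather than committing to one pre-set value.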