Abstract

An adaptive multiagent reinforcement learning method for congestion control on dynamic high-speed networks is presented. Traditional reactive congestion control selects a source rate based on whether the queue length exceeds a predefined threshold. However, determining the congestion threshold and sending rate is difficult and inaccurate because of propagation delay and the dynamic nature of the networks. To address this problem, a simple and robust cooperative multiagent congestion controller (CMCC) is proposed. It consists of two subsystems: a long-term policy evaluator (an expectation-return predictor) and a short-term rate selector composed of an action-value evaluator and a stochastic action selector. After receiving cooperative reinforcement signals generated by a cooperative fuzzy reward evaluator based on game theory, CMCC takes the best action to regulate source flow, achieving high throughput and a low packet-loss rate. Through its learning procedures, CMCC adapts its actions to time-varying environments. Simulation results show that the proposed approach improves system utilization while simultaneously reducing packet losses.
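For intuition, the two-subsystem structure described above resembles a standard actor-critic learner: a critic that estimates long-term expected return, and an actor that holds action preferences sampled through a stochastic selector. The minimal Python sketch below illustrates that general pattern only; the class name, candidate rates, update rules, and all parameters are illustrative assumptions, not the paper's exact design, and the cooperative fuzzy reward evaluator is treated as an external source of the reward signal.

```python
import math
import random

class RateController:
    """Hypothetical actor-critic-style rate controller for one source agent.

    Sketch under assumptions: a tabular TD(0) critic stands in for the
    long-term expectation-return predictor, and softmax sampling over
    action preferences stands in for the stochastic action selector.
    """

    def __init__(self, rates, alpha=0.1, beta=0.1, gamma=0.9, tau=0.5):
        self.rates = rates      # discrete candidate sending rates (assumed)
        self.alpha = alpha      # critic (return predictor) step size
        self.beta = beta        # actor (action-value evaluator) step size
        self.gamma = gamma      # discount factor for long-term return
        self.tau = tau          # softmax temperature for action selection
        self.V = {}             # long-term expected return per state
        self.Q = {}             # short-term action preferences

    def select_rate(self, state):
        """Stochastic action selector: softmax over action preferences."""
        prefs = [self.Q.get((state, r), 0.0) / self.tau for r in self.rates]
        m = max(prefs)  # subtract max for numerical stability
        weights = [math.exp(p - m) for p in prefs]
        return random.choices(self.rates, weights=weights)[0]

    def update(self, state, rate, reward, next_state):
        """TD update: the critic evaluates the policy and its temporal-
        difference error adjusts the actor's action preferences."""
        v, v_next = self.V.get(state, 0.0), self.V.get(next_state, 0.0)
        td_error = reward + self.gamma * v_next - v
        self.V[state] = v + self.alpha * td_error
        q = self.Q.get((state, rate), 0.0)
        self.Q[(state, rate)] = q + self.beta * td_error

# Illustrative usage: state labels and reward value are placeholders;
# in the paper's setting the reward would come from the cooperative
# fuzzy reward evaluator shared among the agents.
agent = RateController(rates=[1, 2, 4, 8])     # e.g. Mbps, assumed
rate = agent.select_rate("low_queue")
agent.update("low_queue", rate, reward=1.0, next_state="low_queue")
```

In such a scheme, the critic's value table plays the role of the long-term policy evaluator, while the preference table and softmax sampling implement the short-term rate selection; the stochastic selector keeps the agents exploring, which matters under the time-varying network conditions the abstract emphasizes.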
