In next-generation mobile communication networks, characterized by dynamic and evolving workloads, efficient resource allocation is paramount for achieving optimal performance. This paper addresses the challenges of multi-agent resource management by integrating stochastic learning automata with continuous-time optimization, presenting a solution for real-time adaptability and dynamic load balancing. The central aim is to achieve the optimal values of local time-varying cost functions while accounting for resource constraints and the feasibility of local allocations. In multi-agent resource allocation systems, where conditions vary over time, the algorithm continuously adapts, tracking the evolving optimal solution for each agent. Using stochastic learning automata, the algorithm handles the time-varying load-balancing function in response to dynamic user demands. Furthermore, we introduce a scalable solution for mobility robustness optimization based on deep reinforcement learning (DRL-MRO). This method dynamically identifies optimal parameter values across the entire network to accommodate diverse mobility patterns, ensuring that the load-balancing parameters adapt seamlessly to the network's changing configuration. The overarching goal is to maintain a consistent quality-of-service level for each agent, thereby enhancing overall system performance. Simulation results confirm that the proposed load-balancing approach, rooted in continuous-time optimization and stochastic learning automata, outperforms existing schemes across diverse system configurations.
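To make the stochastic-learning-automata idea concrete, the sketch below implements a classical linear reward-inaction (L_R-I) automaton choosing among candidate resources. This is a minimal illustrative example, not the paper's exact scheme: the class name, learning rate, and toy environment are all assumptions introduced here for illustration.

```python
import random


class LinearRewardInaction:
    """Minimal linear reward-inaction (L_R-I) learning automaton.

    Hypothetical sketch: the automaton keeps a probability vector over
    actions (e.g. candidate resources) and reinforces an action when the
    environment's response is favorable; on an unfavorable response it
    leaves the probabilities unchanged (the 'inaction' part).
    """

    def __init__(self, n_actions, learning_rate=0.1):
        self.probs = [1.0 / n_actions] * n_actions
        self.lr = learning_rate

    def choose(self, rng=random):
        # Sample an action from the current probability vector.
        r, acc = rng.random(), 0.0
        for i, p in enumerate(self.probs):
            acc += p
            if r < acc:
                return i
        return len(self.probs) - 1

    def update(self, action, reward):
        # On reward, shift probability mass toward the chosen action;
        # the update preserves the sum of probabilities.
        if reward:
            for i in range(len(self.probs)):
                if i == action:
                    self.probs[i] += self.lr * (1.0 - self.probs[i])
                else:
                    self.probs[i] *= (1.0 - self.lr)


if __name__ == "__main__":
    # Toy stationary environment: resource 1 succeeds most often.
    rng = random.Random(0)
    automaton = LinearRewardInaction(n_actions=3, learning_rate=0.05)
    success_prob = [0.3, 0.8, 0.4]
    for _ in range(2000):
        a = automaton.choose(rng)
        automaton.update(a, rng.random() < success_prob[a])
    # Over time the automaton concentrates probability on the
    # best-performing resource.
```

In the time-varying setting the paper targets, the environment's success probabilities drift with user demand, and the automaton's probability vector tracks the changing optimum rather than converging once and stopping.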