Published in last 50 years
Articles published on Dynamic Resource Allocation
- Research Article
- 10.1186/s13031-025-00723-8
- Nov 3, 2025
- Conflict and health
- Felix Amberg + 10 more
Access to healthcare in Burkina Faso, particularly obstetric care, is severely reduced by nearby armed conflict. However, systematic assessments of the spatial distribution of facility-based delivery rates and their evolution, as well as how they relate to conflict intensity, facility characteristics, and geo-spatial determinants, are absent. We analysed spatial and temporal shifts in facility-based deliveries in Burkina Faso's primary healthcare centers in relation to conflict. Using spatial analyses, we examined how conflict-related deaths influenced delivery patterns, considering variations by facility type and pre-conflict service volume. We obtained monthly healthcare facility data (2016-2021) from Burkina Faso's Health Management Information System (HMIS) and conflict event data from the Uppsala Conflict Data Program (UCDP). The study covered all primary healthcare centers in conflict-affected northern and eastern districts (> 10 UCDP conflict deaths, 2016-2021) and neighboring southern districts: 854 CSPSs (Centres de Santé et de Promotion Sociale) and 53 CMs (Centres Médicaux). The study identified geographical variations in facility-based delivery rates, notably from 2018-2019, with spatial clusters of lower rates becoming predominant in northern areas and higher rates doubling along southern routes and cities. This shift coincided spatially and temporally with conflict escalation. In conflict hotspots, the average monthly rate of facility-based deliveries decreased over time, irrespective of pre-conflict service volume. However, CM facilities showed an upward trend, in contrast to CSPS facilities. Outside conflict hotspots, facilities with exceptional pre-conflict service volume showed similar upward trends, while low- and high-volume facilities showed moderate increases. CM facilities consistently maintained higher facility-based delivery rates over time than CSPS facilities.
This research provides crucial insights for strengthening Burkina Faso's health system resilience to conflict by spatially identifying how facility characteristics and geo-spatial determinants shape healthcare disruptions. Mapping these elements at a fine scale enables adaptive policy interventions and dynamic resource allocation based on evolving conflict dynamics, enhancing obstetric care during conflict. By integrating the applied geospatial methods into national health systems, we can enhance responsiveness, enabling targeted and timely interventions, as well as efficient and flexible resource distribution (e.g., funding, personnel, and medical supplies). This also supports improved healthcare demand forecasting, ultimately ensuring a more proactive, data-driven, and conflict-sensitive approach to maternal health policy planning in crisis response.
- Research Article
- 10.3390/fi17110502
- Nov 3, 2025
- Future Internet
- Ionuț Murarețu + 3 more
This paper introduces a novel framework that integrates reinforcement learning with declarative modeling and mathematical optimization for dynamic resource allocation during mass casualty incidents. Our approach leverages Mesa as an agent-based modeling library to develop a flexible and scalable simulation environment as a decision support system for emergency response. This paper addresses the challenge of efficiently allocating casualties to hospitals by combining mixed-integer linear and constraint programming while enabling a central decision-making component to adapt allocation strategies based on experience. The two-layer architecture ensures that casualty-to-hospital assignments satisfy geographical and medical constraints while optimizing resource usage. The reinforcement learning component receives feedback through agent-based simulation outcomes, using survival rates as the reward signal to guide future allocation decisions. Our experimental evaluation, using simulated emergency scenarios, shows a significant improvement in survival rates compared to traditional optimization approaches. The results indicate that the hybrid approach successfully combines the robustness of declarative modeling and the adaptability required for smart decision making in complex and dynamic emergency scenarios.
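The abstract's two-layer architecture (a constraint-respecting assignment layer fed back by a learning layer) can be sketched roughly as follows. This is an illustrative Python toy: a greedy nearest-feasible rule stands in for the paper's MILP/CP solver, and a one-line value update stands in for the reinforcement learning component; all names, rules, and numbers are hypothetical.

```python
# Hypothetical two-layer sketch: constrained assignment + learning from
# survival feedback. Not the paper's actual model.

def assign_casualties(casualties, hospitals, max_dist):
    """Greedy stand-in for the MILP/CP layer: each casualty goes to the
    nearest hospital that has free capacity within max_dist.
    Mutates hospitals' bed counts as it assigns."""
    assignment = {}
    for cid, loc in casualties.items():
        feasible = [(abs(loc - h["loc"]), hid) for hid, h in hospitals.items()
                    if h["beds"] > 0 and abs(loc - h["loc"]) <= max_dist]
        if feasible:
            _, hid = min(feasible)          # nearest feasible hospital
            hospitals[hid]["beds"] -= 1
            assignment[cid] = hid
    return assignment

def update_preference(q, hospital, survival_rate, lr=0.1):
    """RL layer: nudge the value of routing casualties to `hospital`
    toward the observed survival rate (the reward signal)."""
    old = q.get(hospital, 0.0)
    q[hospital] = old + lr * (survival_rate - old)
    return q

hospitals = {"H1": {"loc": 0, "beds": 1}, "H2": {"loc": 5, "beds": 2}}
casualties = {"c1": 1, "c2": 4}
plan = assign_casualties(casualties, hospitals, max_dist=10)
q = update_preference({}, plan["c1"], survival_rate=0.8)
```

In the actual framework the assignment would be solved with a mixed-integer or constraint solver, and the reward would come from agent-based simulation outcomes rather than a fixed number.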
- Research Article
- 10.1016/j.trc.2025.105352
- Nov 1, 2025
- Transportation Research Part C: Emerging Technologies
- Yiyang Wang + 3 more
Dynamic cybersecurity resource allocation in connected and automated vehicles
- Research Article
- 10.1109/tmc.2025.3581510
- Nov 1, 2025
- IEEE Transactions on Mobile Computing
- Weibei Fan + 6 more
Dynamic Topology and Resource Allocation for Distributed Training in Mobile Edge Computing
- Research Article
- 10.62762/tssr.2025.527003
- Oct 31, 2025
- ICCK Transactions on Systems Safety and Reliability
- Yuchang Mo + 8 more
Large-scale computing systems, such as cloud data centers, grid infrastructures, and high-performance computing clusters, are the backbone of modern information technology ecosystems. These systems typically consist of numerous heterogeneous, multi-state computing nodes that exhibit varying performance levels due to component failures, degradation, or dynamic resource allocation. Performability analysis, which integrates both system reliability and performance evaluations to quantify the probability of the system operating at a specified performance level, is critical for ensuring the efficient, reliable, and cost-effective operation of these complex systems. This paper presents a comprehensive review of recent advancements in performability analysis for large-scale multi-state computing systems over the past decade. It classifies existing research into three core methodological categories: binary decision diagram (BDD)-based approaches, multi-valued decision diagram (MDD)-based approaches, and comparative benchmarking with traditional methods (e.g., continuous-time Markov chains (CTMC), universal generating function (UGF)). For each category, the paper details key methodologies, algorithmic innovations, and practical applications. Additionally, promising future directions are proposed to address emerging challenges, such as handling dynamic system behaviors, integrating real-time data, and optimizing resource allocation for performability. This review provides a valuable reference for researchers, system designers, and operators seeking to enhance the performability of large-scale computing systems and mitigate risks associated with service level agreement (SLA) violations.
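For readers unfamiliar with the performability metric the review is organized around, it can be computed by brute force for a tiny system; BDD/MDD methods exist precisely to avoid this exponential enumeration. The node states and probabilities below are made up for illustration.

```python
from itertools import product

def performability(nodes, demand):
    """Probability that total capacity meets `demand`, by exhaustive
    enumeration over independent multi-state nodes. Each node is a list
    of (capacity, probability) pairs summing to probability 1. Decision
    diagram methods compute the same quantity without enumerating every
    state combination."""
    total = 0.0
    for combo in product(*nodes):
        cap = sum(c for c, _ in combo)
        p = 1.0
        for _, pi in combo:
            p *= pi
        if cap >= demand:
            total += p
    return total

# Two identical nodes, each fully up (cap 2), degraded (cap 1), or failed (cap 0).
node = [(2, 0.7), (1, 0.2), (0, 0.1)]
p = performability([node, node], demand=3)
```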
- Research Article
- 10.5121/ijcsit.2025.17502
- Oct 28, 2025
- International Journal of Computer Science and Information Technology
- Akheel Mohammed + 5 more
The rapid growth of mobile devices and services has introduced unprecedented challenges for next-generation wireless and mobile networks, especially as the industry moves toward 5G and 6G architectures. Conventional, rule-based network management paradigms fail to tackle challenges like scalability, latency, spectrum and energy efficiency, and dynamic resource allocation in today's complicated, heterogeneous environments. Artificial Intelligence (AI) is transforming this landscape by providing adaptive, data-driven solutions at every layer of the network. With machine learning, deep learning, and reinforcement learning, AI enables traffic forecasting, real-time resource utilization optimization, mobility prediction, anomaly detection, and energy efficiency. Deployed at both the network edge and core, these technologies support self-organizing networks, low-latency response, and improved Quality of Service (QoS) and user experience. Key advantages are enhanced throughput, lower latency, and better spectral usage, particularly with deep reinforcement and federated learning techniques. However, challenges remain involving explainable AI, real-time edge processing constraints, data availability, and integration with existing infrastructure. The article proposes a research agenda focused on developing standardized frameworks, enabling cross-layer integration, and hybridizing AI with classical methods. By examining both current achievements and future directions, this work illuminates AI's critical role in making wireless networks more autonomous, efficient, and user-centric.
- Research Article
- 10.34257/gjcstbvol25is1pg27
- Oct 27, 2025
- Global Journal of Computer Science and Technology
- Maheshkumar Mayilsamy
This article explores the integration of Reinforcement Learning (RL) with stream processing systems to address the fundamental challenges of handling unpredictable workloads and dynamic resource constraints. Traditional stream processing frameworks rely on static configurations that struggle to adapt to fluctuating conditions, leading to either resource overprovisioning or performance degradation. The article presents RL as a promising solution through intelligent agents that continuously learn from system performance to optimize crucial parameters, including task scheduling, resource allocation, checkpoint frequency, and load balancing. It examines the critical importance of adaptivity in stream processing, outlines RL fundamentals applicable to this domain, and details specific applications including dynamic resource allocation, task scheduling optimization, adaptive checkpointing, and intelligent load balancing. Additionally, it addresses implementation challenges such as training overhead, reward function design, cold start problems, and integration with existing frameworks. Current tools and frameworks enabling RL-enhanced stream processing are evaluated, and future research directions, including multi-agent RL, federated reinforcement learning, explainable RL for operations, and green computing optimization, are discussed.
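The core loop such an RL agent runs (observe load, pick a scaling action, learn from the latency penalty) is standard Q-learning. A minimal sketch follows, with states, actions, and rewards chosen purely for illustration and not taken from the article.

```python
import random

def choose_action(q, state, actions, eps=0.1, rng=random.Random(0)):
    """ε-greedy choice over adaptation actions (scale up/down, hold)."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def q_update(q, state, action, reward, next_state, actions, lr=0.5, gamma=0.9):
    """Standard Q-learning update; the reward could be, e.g., negative
    end-to-end latency so the agent learns to keep latency low."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + lr * (reward + gamma * best_next - old)

actions = ["scale_up", "scale_down", "hold"]
q = {}
# Observed high latency, scaled up, paid a small latency penalty this step.
q_update(q, "high_latency", "scale_up", reward=-0.2, next_state="ok", actions=actions)
```

In a real deployment the state would encode backpressure, queue lengths, and checkpoint timing, and the agent would run for many episodes before its policy is trusted.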
- Research Article
- 10.25130/tjps.v30i5.1850
- Oct 25, 2025
- Tikrit Journal of Pure Science
- Alaa Abdul Ridha Abdulqader Karkhi
The Domain Name System (DNS) is a vital Internet component that translates human-readable domain names into their corresponding IP addresses. DNS systems face several cyber threats, including Distributed Denial of Service (DDoS) attacks, spoofing, and cache poisoning, which expose data to unauthorized access and reduce service availability. This research examines virtualization technology as a DNS security enhancement that increases system resilience. The work implements DNS security enhancements through virtualization features, including threat isolation, service segmentation, automated recovery, and dynamic resource allocation, to protect DNS systems against these vulnerabilities. Across real-world deployments, case studies, and simulations, the framework improved service accessibility by 98% during DDoS attacks, decreased disaster recovery time by 60%, and reduced operational costs by 30%. The study presents extensive evidence that virtualization provides fault tolerance, superior protection against complex security threats, and scalability. The findings demonstrate that virtualization delivers vital security features, including DNS component protection and fast disaster recovery. Security-conscious organizations facing evolving threats should adopt virtualization-based DNS service maintenance for its scalable and cost-efficient delivery capabilities. Virtualization in DNS thus represents a strategic, forward-thinking approach to building sustainable, flexible, and protected online infrastructures.
- Research Article
- 10.5296/jbls.v17i1.23249
- Oct 22, 2025
- Journal of Biology and Life Science
- Zhen Wang
To investigate the role of an Internet-Plus-based smart emergency platform in reducing pre-hospital Medical Priority Dispatch System (MPDS) response time and optimizing dynamic allocation of emergency resources, and to provide evidence-based support for enhancing emergency care efficiency, a mixed-methods approach was employed: (1) Quantitative analysis: comparing response time and dispatch efficiency data (n=12,358 cases) from six months before and after the launch of an Internet-based emergency platform in a city (July 2024–June 2025); (2) Qualitative research: conducting semi-structured in-depth interviews with 30 pre-hospital emergency nurses and 10 dispatchers, using Colaizzi's phenomenological analysis method to extract themes and analyze platform pain points and directions for improvement. Following platform implementation, average dispatch response time (from call receipt to vehicle dispatch) decreased from (92.5 ± 15.8) seconds to (38.2 ± 9.4) seconds (t=15.324, P<0.001), and average dispatch response time (from assignment to departure) decreased from (135.6 ± 20.1) seconds to (98.7 ± 14.5) seconds (t=8.912, P<0.001). Regarding resource allocation, the proportion of cross-regional collaborative ambulance assignments increased from 15.7% to 28.9% (χ²=210.5, P<0.001), and the "resource misallocation rate" (e.g., dispatching non-critical cases to critical care units) based on platform AI triage decreased from 12.5% to 5.8% (χ²=95.7, P<0.001). In addition, nurse satisfaction with the "intelligent triage guidance" and "dynamic route planning" functions reached 92%. In conclusion, in the era of Internet Plus and AI, the smart emergency platform integrates data and enables intelligent decision-making, significantly optimizing pre-hospital response workflows and resource allocation efficiency.
This represents a core implementation pathway for “Internet Plus Emergency Nursing” and holds significant implications for improving pre-hospital emergency response time. Therefore, future efforts should be focused on balancing technological empowerment with humanistic care while strengthening nurses' information literacy training.
- Research Article
- 10.54254/2754-1169/2025.ld28268
- Oct 22, 2025
- Advances in Economics, Management and Political Sciences
- Yifan Hu
With the rise of the platform economy, local life service platforms are playing an increasingly crucial role as the infrastructure linking urban residents to localized consumption. Current studies center primarily on platform business models and the technology trajectory of platform firms. Nevertheless, cross-national comparisons of how the institutional environment conditions the operational logic of platforms remain rather underdeveloped. Applying the comparative case study method, this paper selects Meituan in China and Uber Eats in the United States as comparative objects, conducting a systematic study of the institutional adaptation models of both in the facets of merchant collaboration, labor management, and user operation. This study documents the platform governance-oriented and market rules-dominated operational models adopted by China and the United States, respectively, and this difference is shaped by the institutional nesting of data governance, labor law systems, and regulatory logics in each jurisdiction. Uber Eats exemplifies dynamic and distributed resource allocation mechanisms, while Meituan demonstrates a stronger capacity for collaborative integration. Therefore, this paper makes the case that the institutional dimension not only influences how platforms are embedded locally, but also emerges as a crucial factor in the global expansion of platforms.
- Research Article
- 10.7494/csci.2025.26.3.6600
- Oct 18, 2025
- Computer Science
- Daisy Sharmah + 1 more
Cloud workloads can overwhelm load balancers, leading to inefficiencies and performance issues. To address these challenges, the Honey Bee Load Balancing algorithm is highly effective in enhancing cloud resource allocation. Inspired by the foraging behavior of honey bees, this algorithm offers a dynamic approach to resource distribution, adapting to changing workloads in real-time. This paper delves into the key features and advantages of Honey Bee Load Balancing, focusing on its dynamic resource allocation, overall response time, and data center processing time. Through a comparative study of existing methodologies, we propose a modified Honey Bee Load Balancing algorithm that incorporates the random selection of virtual machines. Utilizing the CloudAnalyst tool for simulation, we compare traditional and proposed Honey Bee Load Balancing algorithms to evaluate overall response time and data center processing time. The proposed algorithm demonstrates superior performance in these parameters compared to the traditional approach.
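The foraging idea, including the paper's random-VM-selection twist, can be caricatured in a few lines. This sketch assumes a reachable load threshold and uses task counts as the load measure; both assumptions are mine, not the authors'.

```python
import random

def honey_bee_balance(loads, threshold, rng=random.Random(42)):
    """Toy honey-bee-inspired balancing: repeatedly move one task from
    the most overloaded VM to a randomly chosen VM among the least
    loaded (the 'random selection of virtual machines' variation).
    `loads` maps VM id -> task count; assumes total load can actually
    fit under `threshold`, otherwise this loop would not terminate."""
    loads = dict(loads)                      # don't mutate the caller's dict
    while max(loads.values()) > threshold:
        busiest = max(loads, key=loads.get)  # overloaded VM sheds a task
        low = min(loads.values())
        candidates = [v for v, l in loads.items() if l == low]
        target = rng.choice(candidates)      # foragers recruit a random low-load VM
        loads[busiest] -= 1
        loads[target] += 1
    return loads

balanced = honey_bee_balance({"vm1": 6, "vm2": 1, "vm3": 1}, threshold=3)
```

CloudAnalyst would supply the actual response-time and processing-time measurements; this toy only shows the rebalancing mechanics.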
- Research Article
- 10.3390/sym17101725
- Oct 14, 2025
- Symmetry
- Sümeye Nur Karahan
Target tracking in integrated sensing and communication (ISAC) systems faces critical challenges due to complex interference patterns and dynamic resource allocation between radar sensing and wireless communication functions. Classical tracking algorithms struggle with the non-Gaussian noise characteristics inherent in ISAC environments. This paper addresses these limitations through a novel hybrid ISAC-LSTM architecture that enhances Extended Kalman Filter performance using intelligent machine learning corrections. The approach processes comprehensive feature vectors including baseline EKF states, ISAC-specific interference indicators, and innovation-based statistical occlusion detection. ISAC systems exhibit fundamental symmetry through dual sensing–communication operations sharing identical spectral and hardware resources, requiring balanced resource allocation, where α_sensing + α_comm = 1. The proposed hybrid architecture preserves this functional symmetry while achieving balanced performance across symmetric dual evaluation scenarios (normal and extreme conditions). Comprehensive evaluation across three realistic deployment scenarios demonstrates substantial performance improvements, achieving 21–24% RMSE reductions over classical methods (3.5–3.6 m vs. 4.6 m) with statistical significance confirmed through paired t-tests and cross-validation. The hybrid system incorporates fail-safe mechanisms ensuring reliable operation when machine learning components encounter errors, addressing critical deployment concerns for practical ISAC applications.
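Two of the abstract's ingredients, the symmetric resource split α_sensing + α_comm = 1 and the fail-safe hybrid estimate, are easy to illustrate. The fixed correction vector below stands in for the LSTM output and is purely hypothetical.

```python
def split_resources(alpha_sensing):
    """Enforce the dual-function symmetry constraint
    alpha_sensing + alpha_comm = 1."""
    if not 0.0 <= alpha_sensing <= 1.0:
        raise ValueError("alpha_sensing must lie in [0, 1]")
    return alpha_sensing, 1.0 - alpha_sensing

def hybrid_estimate(ekf_state, correction):
    """Hybrid tracker: classical EKF output plus a learned residual
    correction (here a fixed stand-in for the LSTM), with a fail-safe
    fallback to the pure EKF estimate if the ML component errors out."""
    try:
        return [s + c for s, c in zip(ekf_state, correction)]
    except TypeError:          # ML component unavailable or malformed
        return list(ekf_state)

a_s, a_c = split_resources(0.6)
est = hybrid_estimate([10.0, 2.0], [0.3, -0.1])   # corrected estimate
safe = hybrid_estimate([10.0, 2.0], None)         # fail-safe path
```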
- Research Article
- 10.3390/healthcare13202577
- Oct 14, 2025
- Healthcare
- Abdulaziz Ahmed + 6 more
Simple Summary: Emergency departments often have long wait times, but we do not fully understand all the factors that contribute to ED overcrowding. This study looked at how weather, football games, holidays, and hospital operations affect ED waiting times at a major medical center over four years. We found that bad weather, especially thunderstorms, leads to more people waiting in the ED. Surprisingly, clear weather also increased wait times. Football games caused more crowding 12 h before game time, likely because of pre-game injuries and celebrations. Weekends and federal holidays had fewer people waiting, probably because people delay non-urgent visits when regular doctors are not available. The number of patients stuck in the ED waiting for hospital beds (boarding) and overall hospital fullness showed complex patterns. When measured at different time points, their effects changed from increasing to decreasing wait times, showing that timing matters in understanding ED crowding. These findings can help hospitals better predict busy periods and adjust staffing. For example, hospitals could add extra staff before thunderstorms or football games. Understanding these patterns helps hospitals prepare for crowding before it happens, potentially reducing your wait time when you need emergency care. Objectives: This study analyzes factors influencing Emergency Department (ED) overcrowding by examining the impacts of operational, environmental, and external variables, including weather conditions and football games. Materials and Methods: This retrospective observational study analyzed emergency department (ED) tracking and hospital census data from a southeastern U.S. academic medical center covering 2019–2023. These data were merged with corresponding weather, football event, and federal holiday data. The dependent variable was the hourly waiting count in the ED, our operational measure of overcrowding.
Seven regression models were developed to assess different predictors across various timestamps. Results: Weather conditions were significantly correlated with increased ED waiting count in the Baseline Model, while federal holidays and weekends were consistently correlated with reduced waiting counts. Boarding count was positively correlated with ED waiting count when concurrent, but boarding counts 3 h and 6 h before showed significant negative correlations. Hospital census showed a negative correlation in the Baseline Model but shifted to a positive effect in other models, reflecting its time-dependent influence on ED operations. Football games 12 h before significantly correlated with increased waiting counts, while games 12 and 24 h after had no significant effects. Discussion: While existing research typically focuses on limited variables and narrow timeframes, the temporal relationships between operational and non-operational factors affecting ED overcrowding remain understudied, particularly the delayed impacts of external events and environmental conditions. Conclusions: This study emphasizes the importance of incorporating both operational and non-operational factors to understand ED patient flow. Identifying robust predictors such as weather conditions, federal holidays, boarding count, and hospital census can inform dynamic resource allocation strategies to mitigate ED overcrowding effectively.
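The lagged-predictor design behind models like these (aligning boarding counts 3 h earlier with the current waiting count) can be sketched with a toy series; the numbers and the single-predictor slope are illustrative only, not the study's models.

```python
def lag(series, k):
    """Shift a series by k hours so that x[t-k] aligns with y[t]."""
    return [None] * k + series[:-k] if k else list(series)

def ols_slope(x, y):
    """Slope of y on x (simple least squares), skipping rows lost to lagging."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    num = sum((a - mx) * (b - my) for a, b in pairs)
    den = sum((a - mx) ** 2 for a, _ in pairs)
    return num / den

boarding = [2, 4, 6, 8, 10, 12]   # hourly boarding counts (made up)
waiting  = [5, 6, 8, 9, 11, 12]   # hourly ED waiting counts (made up)
b_now  = ols_slope(lag(boarding, 0), waiting)   # concurrent boarding
b_lag3 = ols_slope(lag(boarding, 3), waiting)   # boarding 3 h earlier
```

The study's finding that the concurrent and 3 h lagged coefficients differ in sign is exactly the kind of contrast this alignment makes visible; a real analysis would fit all predictors jointly with controls.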
- Research Article
- 10.25195/ijci.v51i2.610
- Oct 10, 2025
- Iraqi Journal for Computers and Informatics
- Basman Saman + 1 more
This paper examines resource consumption trends and optimization mechanisms for blockchain-enabled security in edge-fog computing environments. While blockchain provides robust, decentralized security for fog networks, its resource demands create a tremendous challenge in resource-constrained settings. Through an in-depth examination of a Practical Byzantine Fault Tolerance (PBFT)-based blockchain deployment across 50 edge devices and 10 fog nodes, the study reveals the most critical resource bottlenecks and proposes an adaptive resource management framework that dynamically balances security requirements against operational efficiency. The proposed work shows that data-type-based optimization and intelligent workload distribution can reduce CPU utilization by 27%, memory usage by 22%, and network bandwidth by 38% without sacrificing security assurance. The paper also introduces a novel dynamic resource allocation algorithm that adjusts consensus participation and cryptographic strength to current system conditions, demonstrating that security-performance trade-offs can be optimally resolved through context-sensitive optimization. These advancements move toward resource-constrained security architectures for edge-fog computing, enabling the broader applicability of blockchain security in resource-poor IoT environments.
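A context-sensitive policy of the kind described, trading consensus participation and cryptographic strength against current load, might look like this in outline. The thresholds, quorum sizes, and key lengths are invented for illustration and are not the authors' values.

```python
def adapt_security(cpu_load, sensitivity):
    """Hypothetical policy in the spirit of the paper's algorithm: under
    high load, low-sensitivity traffic gets fewer consensus participants
    and lighter cryptography; sensitive data keeps full protection.
    All thresholds and parameter values here are illustrative."""
    if sensitivity == "high":
        # Sensitive data never trades security for throughput.
        return {"quorum": 10, "key_bits": 256}
    if cpu_load > 0.8:
        # Constrained nodes relax consensus participation and key strength.
        return {"quorum": 4, "key_bits": 128}
    return {"quorum": 7, "key_bits": 192}

relaxed = adapt_security(cpu_load=0.9, sensitivity="low")
strict  = adapt_security(cpu_load=0.9, sensitivity="high")
```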
- Research Article
- 10.1088/2631-8695/ae0f76
- Oct 3, 2025
- Engineering Research Express
- Zhen Zhao + 3 more
As a key component of modern industrial equipment, bearings are prone to various surface defects during the manufacturing process. Based on the YOLOv8 architecture, this study develops a new single-stage object detection model, GCEI-YOLO. Adopting the lightweight GhostConv-C2flight feature extraction network effectively reduces redundancy in the computing process and the frequency of memory access. A dynamic grouping strategy and multi-scale branch processing were introduced to obtain the EGCS-EMA efficient channel attention mechanism, enhancing important channel feature information. By adopting the Shape-IoU loss function and a reasonable gradient gain allocation strategy, the model pays more attention to ordinary-quality samples. The resulting GCEI-YOLO model has 2.46 M parameters and a computational cost of 7.4 GFLOPs, balancing compactness and performance. Compared with the benchmark model, GCEI-YOLO improves mAP50, mAP50:95, and precision by 2.98%, 4.43%, and 1.25% respectively, while the number of parameters and the amount of computation are reduced by 18.27% and 9.75% respectively. Ablation experiments show that GCEI-YOLO achieves synergy through hierarchical division of labor, dynamic resource allocation, and explicit training adaptation, and is well suited to bearing surface defect detection tasks and embedded platform deployment and inference.
- Research Article
- 10.1038/s41598-025-17565-2
- Oct 2, 2025
- Scientific Reports
- Yue Yang + 12 more
This study evaluates the dual challenges faced at the 19th Asian Games, the first Asian Games held after the COVID-19 pandemic: managing routine sports injuries and illnesses alongside pandemic-specific protocols, including intelligent disease surveillance and hybrid medical systems. The transition from closed-loop to open events necessitated a novel approach to medical service planning, integrating real-time health monitoring and adaptive public health strategies. This study aimed to evaluate the effectiveness of medical services at the 19th Asian Games by analyzing injury/illness rates and emergency response efficiency. It also assessed the impact of post-pandemic protocols, such as hybrid healthcare systems and self-rehabilitation strategies for mild COVID-19 cases, on care delivery. The findings offer actionable insights to optimize medical resource allocation in future multisport events, ensuring preparedness for both routine healthcare demands and pandemic-related challenges. Medical services were provided for 33 days to all Games stakeholders. The Games introduced an integrated medical information system—Emergency Medical Support System (EMSS) and Asian Games Information System—Medical Department (AGIS-MED)—to enable real-time data sharing, standardized injury classification, and efficient emergency transfers. This cross-sectional study analyzed illness and injury patterns, medication usage, and the efficiency of medical response protocols during the 19th Asian Games (September 23–October 8, 2023). A total of 11,658 medical encounters were recorded, including 2368 injuries and 9290 illnesses. Among the 1870 athlete cases, 40.7% were injuries and 59.2% were illnesses, with an overall injury rate of 6.44 per 100 registered athletes. Contact sports such as wrestling, basketball, boxing, hockey, and athletics exhibited the highest injury rates. Emergency transfers were required in 349 cases, and 54 patients were hospitalized.
Despite relaxed COVID-19 testing mandates, the Games reported zero outbreaks, demonstrating the effectiveness of self-rehabilitation strategies for mild cases. The integration of intelligent medical systems and dynamic medical resource allocation significantly enhanced emergency response efficiency. Our data-driven framework, including segregated hospital wards and optimized personnel distribution, reduced athlete hospitalization stays to 3 days (vs. the national average of 9.2 days), offering a replicable model for future large-scale sports events.
- Research Article
- 10.1016/j.sysarc.2025.103469
- Oct 1, 2025
- Journal of Systems Architecture
- Xiaozhu Song + 5 more
Dynamic task offloading and resource allocation for energy-harvesting end–edge–cloud computing systems
- Research Article
- 10.29304/jqcsm.2025.17.32378
- Sep 30, 2025
- Journal of Al-Qadisiyah for Computer Science and Mathematics
- Hiba Abdulrazzak Ahmed
Effective use of computational resources is a challenging issue in cloud data centres, where user demands are very high. Classical optimization methods are often unable to cope with changing workloads and can therefore yield inefficient decisions. A hybrid optimization algorithm combining Particle Swarm Optimization and Ant Colony Optimization (PSO–ACO) is presented in this paper to improve resource allocation efficiency in cloud environments. In this hybrid model, the heuristic search ability of ACO and the exploitative nature of PSO are synergized to meet the demands of dynamic resource provisioning with minimal energy consumption, reduced SLA violations, and improved load balancing. The results show that the hybrid PSO–ACO algorithm achieves the highest resource efficiency while reducing execution time and SLA violations, balances load effectively, and reaches optimal solutions quickly and stably. The hybrid approach clearly outperforms both ACO and PSO individually on all performance indicators, making it a strong choice for dynamic cloud computing systems.
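The two ingredients being hybridized are standard: a PSO velocity/position update (exploitation) and a pheromone-proportional choice (ACO's stochastic search). A minimal sketch of each follows, with made-up parameters and a one-dimensional position; how the paper couples the two layers is not specified here.

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.5, c1=1.0, c2=1.0, rng=random.Random(1)):
    """One PSO velocity/position update: inertia plus pulls toward the
    particle's personal best and the swarm's global best."""
    r1, r2 = rng.random(), rng.random()
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

def aco_pick(pheromone, rng=random.Random(1)):
    """Pheromone-proportional roulette choice among candidate hosts."""
    total = sum(pheromone.values())
    r = rng.random() * total
    acc = 0.0
    for option, tau in pheromone.items():
        acc += tau
        if r <= acc:
            return option

new_pos, new_vel = pso_step(pos=2.0, vel=0.0, pbest=3.0, gbest=5.0)
host = aco_pick({"host_a": 0.7, "host_b": 0.3})
```

In a scheduling context the "position" would encode a candidate VM placement and the pheromone trails would be reinforced by low-cost allocations found so far.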
- Research Article
- 10.1371/journal.pone.0332858
- Sep 30, 2025
- PLOS One
- Yuhang Wang + 3 more
The high concentration of hazardous sources in chemical parks is prone to causing chain accidents, creating a demand for dynamic cooperative optimization of emergency resource scheduling. To address the deficiencies of existing studies in adapting to dynamic multi-hazard scenarios and quantifying resource allocation fairness, this paper constructs a three-objective mixed-integer programming model that integrates time efficiency, demand coverage, and allocation fairness. Fairness is innovatively quantified as an independent optimization objective, and a standard deviation-based dynamic resource allocation balance index is proposed, combined with a multi-warehouse collaborative supply and multi-resource coupling constraint mechanism, to systematically resolve the trade-offs between timeliness, adequacy, and fairness in emergency dispatching for chemical accidents. An improved NSGA-II algorithm is used to solve for the Pareto front efficiently, with search efficiency improved by an elite reservation strategy and a crowding-adaptive adjustment mechanism. In the case study, comparative experiments with the weighted method and the MOGWO algorithm demonstrate that NSGA-II performs superiorly on key metrics, exhibiting excellent convergence, diversity, and stability. On this basis, a case study is conducted using a chemical industrial park in China as an example, generating 41 sets of weights covering extreme preferences, two-objective balance, and three-objective balance. Decision-makers screen solutions based on loss tolerance thresholds and select the optimal solution using a composite score of comprehensive weighted losses. The study further reveals that improvements in demand satisfaction rates are often accompanied by significant increases in transportation time, while pursuing optimal fairness may weaken overall demand satisfaction levels.
Sensitivity analysis confirms that resource demand is the key driver determining the number of feasible solutions, while fairness, as an independent optimization objective, holds irreplaceable importance in emergency scheduling decisions.
- Research Article
- 10.1038/s41598-025-18353-8
- Sep 26, 2025
- Scientific Reports
- Muhammad Shoaib + 4 more
Unmanned aerial vehicles (UAVs) used as aerial base stations (ABS) can provide economical, on-demand wireless access. This research investigates dynamic resource allocation in multi-UAV-enabled communication systems with the aim of maximizing long-term rewards. More specifically, without exchanging information with other UAVs, every UAV independently chooses its communicating users, power levels, and sub-channels to establish communication with a ground user. To model the unpredictability of the environment, we formulate the long-term allocation of system resources as a stochastic game that maximizes the expected reward. Each UAV in this game plays the role of a learning agent, and the system's resource allocation solution corresponds to the actions taken by the UAVs. We then build a reward-based multi-agent learning (RMAL) framework, in which each agent uses learning to identify its best strategies based on local observations. Specifically, we offer an agent-independent strategy in which each agent makes decisions separately but shares a common Q-learning-based framework. Simulation findings show that the performance of the proposed RMAL-based resource allocation method can be enhanced by employing suitable exploitation and exploration parameters. Moreover, the proposed RMAL algorithm delivers performance close to that of full information exchange between UAVs.
Doing so achieves a satisfactory compromise between the increase in performance and the additional burden of information transmission.
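The agent-independent scheme (each UAV updating its own table from local observations only, with no inter-UAV exchange) reduces to independent Q-learning. A minimal sketch follows; the states, channel actions, rewards, and learning rates are hypothetical, not the paper's configuration.

```python
def independent_q_step(q_tables, states, actions, rewards, next_states,
                       action_space, lr=0.5, gamma=0.9):
    """One RMAL-style step: every UAV agent updates its own Q-table from
    its local state, action, and reward. No information is shared across
    agents; only the update rule is common."""
    for agent, q in q_tables.items():
        s, a = states[agent], actions[agent]
        best_next = max(q.get((next_states[agent], b), 0.0) for b in action_space)
        old = q.get((s, a), 0.0)
        q[(s, a)] = old + lr * (rewards[agent] + gamma * best_next - old)
    return q_tables

space = ["ch1", "ch2"]                      # hypothetical sub-channel actions
qs = independent_q_step(
    q_tables={"uav1": {}, "uav2": {}},
    states={"uav1": "s0", "uav2": "s0"},
    actions={"uav1": "ch1", "uav2": "ch2"},
    rewards={"uav1": 1.0, "uav2": 0.5},     # e.g., per-UAV throughput reward
    next_states={"uav1": "s1", "uav2": "s1"},
    action_space=space,
)
```

The appeal of this design, as the abstract notes, is that it avoids the signalling overhead of full information exchange while still converging to good joint behaviour in simulation.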