Abstract

We consider a minimum span channel allocation problem (MS-CAP) to overcome spectrum scarcity and improve the efficiency of unmanned aerial vehicle (UAV)-enabled wireless networks. Specifically, the MS-CAP minimizes the difference between the maximum and minimum used frequencies, i.e., the required total bandwidth, while guaranteeing the quality-of-service (QoS) requirement of each wireless link in the network. The conventional optimal minimum span channel allocation (MS-CA) scheme is based on a centralized approach, which assumes that global network information is available at a central controller. In practice, however, this may not be feasible in dynamic environments such as UAV-enabled wireless networks, since the real-time exchange of network information and channel allocation results with dynamically moving UAVs incurs prohibitive overhead. Hence, we propose a novel practical MS-CA algorithm based on distributed multi-agent reinforcement learning (MARL), in which each agent independently learns its best strategy from its local observations. To the best of our knowledge, the proposed technique is the first distributed MARL design for the MS-CAP in multi-UAV-enabled wireless networks. Numerical results reveal that the proposed distributed MS-CA technique can effectively reduce the required total bandwidth while ensuring the QoS requirement of each link, expressed as a signal-to-interference-plus-noise ratio (SINR) threshold, even in dynamic wireless networks, which validates the applicability of the proposed distributed MS-CA framework to dynamic networks.
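As a rough illustration of the objective described above, the MS-CAP can be sketched as the following optimization; the notation here is introduced for illustration and is not taken from the paper: $\mathcal{L}$ denotes the set of wireless links, $f_\ell$ the frequency (channel index) assigned to link $\ell$, $\gamma_\ell$ its resulting SINR, and $\gamma_{\mathrm{th}}$ the SINR threshold encoding the QoS requirement.

\begin{align}
\min_{\{f_\ell\}_{\ell \in \mathcal{L}}} \quad & \max_{\ell \in \mathcal{L}} f_\ell \;-\; \min_{\ell \in \mathcal{L}} f_\ell \\
\text{s.t.} \quad & \gamma_\ell\!\left(\{f_{\ell'}\}_{\ell' \in \mathcal{L}}\right) \ge \gamma_{\mathrm{th}}, \qquad \forall \ell \in \mathcal{L},
\end{align}

where each link's SINR depends on the entire allocation because co-channel links interfere with one another, and the objective value is the span, i.e., the required total bandwidth.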
