Abstract

Flexible adaptation to differentiated quality-of-service (QoS) requirements is important for future 6G networks, which must support a wide variety of services. Mobile ad hoc networks (MANETs) can provide flexible communication services to users through self-configuration and rapid deployment. However, the dynamic wireless environment, limited resources, and complex QoS requirements pose great challenges for network routing. Motivated by advances in artificial intelligence, a deep reinforcement learning-based collaborative routing (DRLCR) algorithm is proposed. The routing policy and subchannel allocation are optimized jointly, aiming at minimizing the end-to-end (E2E) delay and improving the network capacity. After sufficient training at the cluster head node, the Q-network is synchronized to each member node, which selects its next hop based on local observations. Moreover, training performance is improved by incorporating historical observations, which increases the adaptability of the routing policy to dynamic environments. Simulation results show that the proposed DRLCR algorithm outperforms comparable algorithms in terms of resource utilization and E2E delay by optimizing network load to avoid congestion. In addition, the effectiveness of the routing policy in a dynamic environment is verified.
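
The following is a minimal, hypothetical sketch of the per-node inference step summarized above: a member node holds a copy of the Q-network trained at the cluster head and greedily selects its next hop from a short window of recent local observations. All class names, dimensions, and observation features here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

class QNetwork:
    """Tiny two-layer MLP mapping an observation window to per-neighbor Q-values.
    (Illustrative stand-in for the trained Q-network synchronized from the
    cluster head; the real model and features are defined in the paper.)"""
    def __init__(self, obs_dim, history_len, num_neighbors, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = obs_dim * history_len
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, (hidden, num_neighbors))
        self.b2 = np.zeros(num_neighbors)

    def q_values(self, obs_window):
        x = obs_window.reshape(-1)                 # flatten the history window
        h = np.maximum(0, x @ self.w1 + self.b1)   # ReLU hidden layer
        return h @ self.w2 + self.b2               # one Q-value per candidate next hop

def select_next_hop(qnet, obs_history):
    """Greedy next-hop choice from the locally held Q-network."""
    return int(np.argmax(qnet.q_values(np.stack(obs_history))))

# Usage: a window of 4 past local observations (e.g., queue lengths,
# link qualities) with 6 features each, choosing among 5 neighbors.
qnet = QNetwork(obs_dim=6, history_len=4, num_neighbors=5)
history = [np.random.rand(6) for _ in range(4)]
print("chosen next hop:", select_next_hop(qnet, history))
```

Feeding a history window rather than a single snapshot is what lets the policy react to trends (e.g., a queue that is filling up), which is the intuition behind using historical observations for adaptability in dynamic topologies.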
