Abstract

Communication among user equipment (UE) plays a pivotal role in coordination and information sharing for accomplishing predefined collaborative tasks in the Internet of Robotic Things (IoRT). The cloud radio access network (C-RAN) has emerged as one of the most compelling architectures for meeting UE demands. However, to optimize power usage while fulfilling UE demand over a long operational period, radio resource allocation (RRA) in C-RAN must be more forward-looking. To address this challenge, we propose a deep reinforcement learning (DRL) based algorithm consisting of two value-based networks, in which one network generates the target value for the other to achieve better convergence. Under identical UE demands, simulation results verify that the proposed DRL algorithm outperforms the deep Q-network (DQN) and conventional approaches in terms of power consumption.
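The two-network scheme mentioned above follows the familiar target-network pattern in value-based DRL: an online network is trained against targets produced by a periodically synchronized copy, which keeps the bootstrapped targets stable. The sketch below illustrates that mechanism only; the linear Q-function, state/action dimensions, learning rate, and sync period are illustrative assumptions, not the paper's actual RRA design.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA, LR = 4, 3, 0.99, 0.01

# Online network (trained each step) and target network (generates the
# TD targets, as in the abstract's two-network scheme). A linear model
# stands in for the paper's value networks.
W_online = rng.normal(size=(N_STATES, N_ACTIONS))
W_target = W_online.copy()

def q_values(W, s):
    """Linear Q(s, .) over all actions."""
    return s @ W

def td_update(s, a, r, s_next, done):
    """One temporal-difference step; the target net bootstraps the target."""
    global W_online
    target = r if done else r + GAMMA * q_values(W_target, s_next).max()
    td_error = target - q_values(W_online, s)[a]
    # Gradient step on the squared TD error for the chosen action's weights.
    W_online[:, a] += LR * td_error * s

def sync_target():
    """Hard copy of the online weights into the target network."""
    global W_target
    W_target = W_online.copy()

# Toy transitions: the target net stays fixed between syncs, which is
# what stabilizes the bootstrapped targets during training.
for step in range(20):
    s = rng.normal(size=N_STATES)
    a = int(q_values(W_online, s).argmax())
    r = rng.normal()
    s_next = rng.normal(size=N_STATES)
    td_update(s, a, r, s_next, done=False)
    if (step + 1) % 10 == 0:
        sync_target()
```

In a power-aware RRA setting, the state would encode UE demands and channel conditions, actions would map to resource-allocation decisions, and the reward would penalize power consumption.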
