Abstract

Reconfigurable intelligent surfaces (RISs) are expected to enhance task offloading performance in non-line-of-sight mobile edge computing (MEC) scenarios. This paper aims to reduce the long-term energy consumption of task offloading in a MEC system assisted by double RISs under non-orthogonal multiple access (NOMA). Because the formulated problem is non-convex, we propose an action-constrained deep reinforcement learning (DRL) framework based on the twin delayed deep deterministic policy gradient (TD3) algorithm, consisting of inner and outer optimization processes. This structure greatly reduces the size of the DRL agent's action space, making the framework easy to implement and fast to converge. In the inner optimization phase, using theoretical derivations, we propose a low-complexity method that optimally determines the transmit power, the local computing frequency, and the computation resource allocation at the base station. In the outer optimization phase, building on the solution derived from the inner optimization, we use the TD3 algorithm to jointly determine the phase shifts of the RIS elements, the offloading ratio, and the transmission time for each time slot. Experimental results demonstrate that the proposed algorithm converges rapidly. Compared with single-RIS-assisted offloading, the proposed double-RIS-assisted offloading scheme reduces energy consumption by 42.8% on average.
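
To make the inner/outer decomposition concrete, the following Python sketch illustrates the interface between the two levels: the outer (TD3) action carries only the RIS phase shifts, the offloading ratio, and the transmission time, while the inner solver returns the remaining variables in closed form. Everything here is a hypothetical placeholder, not the paper's method: the constant N_RIS, the cascaded-gain expression, the inner closed forms, and the energy model are invented for illustration, and the TD3 neural-network training loop is omitted.

```python
# Illustrative sketch of the two-level (inner/outer) structure described in
# the abstract. All constants and formulas below are hypothetical
# placeholders, not the paper's derivations.
import numpy as np

N_RIS = 32          # total reflecting elements across the two RISs (assumed)
rng = np.random.default_rng(0)

def inner_solve(phases, offload_ratio, tx_time):
    """Placeholder for the low-complexity inner optimization: given the
    outer action, return (transmit power, local CPU frequency, BS
    computation resource share). The formulas are toy stand-ins."""
    gain = np.abs(np.sum(np.exp(1j * phases))) / N_RIS   # toy cascaded gain
    p_tx = offload_ratio / (tx_time * max(gain, 1e-6))   # toy closed form
    f_loc = 1.0 - offload_ratio                          # toy closed form
    c_bs = offload_ratio                                 # toy closed form
    return p_tx, f_loc, c_bs

def slot_energy(action):
    """Outer objective: energy for one time slot given the DRL action
    (RIS phase shifts, offloading ratio, transmission time)."""
    phases, rho, t = action[:N_RIS], action[N_RIS], action[N_RIS + 1]
    p_tx, f_loc, c_bs = inner_solve(phases, rho, t)
    return p_tx * t + 0.5 * f_loc**3 + 0.1 * c_bs        # toy energy model

# A trained TD3 actor would output this (N_RIS + 2)-dimensional action;
# a random action is evaluated here just to show the interface.
action = np.concatenate([rng.uniform(0, 2 * np.pi, N_RIS),  # phase shifts
                         [0.5],                             # offload ratio
                         [0.2]])                            # tx time (s)
print("slot energy:", slot_energy(action))
```

The design point this sketch reflects is the one claimed in the abstract: because the transmit power, local frequency, and BS resource allocation are recovered analytically inside the environment step, the DRL agent's action space shrinks to the phase shifts, offloading ratio, and transmission time alone, which is what enables the reported fast convergence.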
