Abstract

Device-to-Device (D2D) communication underlaying a cellular network is a promising approach for improving spectrum efficiency. However, underlay D2D transmission introduces cross-channel and co-channel interference to cellular and other D2D users, which makes spectrum allocation a significant technical challenge. In addition, massive connectivity is a further issue that must be addressed in 5G and beyond networks. To handle it, non-orthogonal multiple access (NOMA) is integrated with the D2D groups (DGs). In this paper, our goal is to maximize the sum throughput of the overall network while maintaining the signal-to-interference-plus-noise ratio (SINR) requirements of the cellular and D2D users. To this end, a distributed spectrum allocation framework based on multi-agent deep reinforcement learning (MADRL), namely the deep deterministic policy gradient (DDPG), is proposed, in which agents share global historical states, actions, and policies during centralized training. Furthermore, a proximal online policy scheme (POPS) is used to reduce the computational complexity of training; it employs a clipped surrogate technique to simplify and stabilize policy updates at the training stage. The simulation results demonstrate that the proposed POPS scheme attains 16.67%, 24.98%, and 59.09% higher performance than DDPG, Deep Dueling, and the deep Q-network (DQN), respectively.
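
The abstract does not spell out the exact form of the clipped surrogate used by POPS; the sketch below assumes the standard PPO-style clipped surrogate objective, which is what a "clipped surrogate technique" conventionally refers to. The function name, the clipping parameter eps=0.2, and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def clipped_surrogate_loss(log_prob_new, log_prob_old, advantages, eps=0.2):
    """PPO-style clipped surrogate objective, negated for minimization.

    log_prob_new / log_prob_old: per-action log-probabilities under the
    current and behaviour policies; advantages: estimated advantage values.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    ratio = np.exp(log_prob_new - log_prob_old)       # importance-sampling ratio
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)    # limit the policy update step
    # Pessimistic (element-wise minimum) of clipped and unclipped objectives
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))
```

Clipping the importance ratio bounds how far each update can move the policy, which is how such schemes keep training stable without the heavier computations of trust-region methods.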
