Abstract

Fog radio access network (F-RAN) is a promising architecture that leverages edge computing and caching to reduce devices’ latency and improve their quality of service. However, interference, which arises when multiple devices are concurrently scheduled on the same radio resource block (RRB), limits the performance of dense F-RANs. This paper considers a multi-cell F-RAN in which the devices of each small cell receive data from their associated fog access point (F-AP) over the same RRB(s). The F-APs transmit data to their associated devices using rate-splitting multiple access (RSMA) to efficiently manage co-channel interference within the small cells. A transmit power control scheme is proposed to maximize the network’s spectral efficiency (SE) while accounting for the devices’ hardware impairments (HWIs). The underlying power control problem is NP-hard and thus highly challenging to solve with conventional optimization approaches. To address this challenge, we propose a distributed deep reinforcement learning (DRL)-based power allocation (DDPA) scheme that accounts for the time-varying dynamics of the network and the HWIs of the devices. In the proposed framework, each F-AP is equipped with a DRL agent that collects signal-to-interference-plus-noise ratio (SINR) and channel state information (CSI) from its connected devices and adapts the transmit power allocation in each scheduling interval. In addition, an ensemble learning framework is exploited to further improve the performance of the proposed DDPA scheme. Extensive simulations demonstrate that the DDPA scheme achieves higher SE than contemporary transmit power control schemes and is particularly well suited to scenarios with non-negligible HWI-induced distortion.
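
As a rough illustration of the per-F-AP agent loop sketched above, the snippet below implements a minimal power-allocation agent in Python. Everything here is an assumption for exposition, not the paper's implementation: the class name FogAPAgent, the linear Q-function (the DDPA scheme would use a deep network), the discrete power levels, and the toy SINR model with an HWI-like distortion term are illustrative stand-ins.

import numpy as np

class FogAPAgent:
    """One agent per F-AP: maps (SINR, CSI) observations to a transmit power level.
    A linear Q-function stands in for the deep network of an actual DRL agent."""
    def __init__(self, state_dim, num_power_levels, lr=0.01, eps=0.1, gamma=0.9):
        self.W = np.zeros((num_power_levels, state_dim))  # Q(s, a) ~ W[a] @ s
        self.lr, self.eps, self.gamma = lr, eps, gamma
        self.num_actions = num_power_levels

    def act(self, state):
        # Epsilon-greedy selection over discrete transmit power levels.
        if np.random.rand() < self.eps:
            return np.random.randint(self.num_actions)
        return int(np.argmax(self.W @ state))

    def update(self, state, action, reward, next_state):
        # One-step temporal-difference (Q-learning) update.
        td_target = reward + self.gamma * np.max(self.W @ next_state)
        td_error = td_target - self.W[action] @ state
        self.W[action] += self.lr * td_error * state

# Toy interaction loop: the state stacks per-device SINRs and channel gains,
# and the reward is the cell sum-rate (a spectral-efficiency proxy). The SINR
# expression below is a made-up toy model whose HWI-style distortion term
# grows with transmit power.
rng = np.random.default_rng(0)
num_devices, num_levels = 4, 8
power_levels = np.linspace(0.1, 1.0, num_levels)       # normalized powers
agent = FogAPAgent(state_dim=2 * num_devices, num_power_levels=num_levels)

state = rng.random(2 * num_devices)
for t in range(1000):                                  # one step per scheduling interval
    a = agent.act(state)
    gains = rng.exponential(1.0, num_devices)          # Rayleigh-fading channel power
    p = power_levels[a]
    sinr = p * gains / (0.1 + 0.05 * p)                # noise plus HWI-like distortion
    reward = float(np.sum(np.log2(1.0 + sinr)))        # sum spectral efficiency
    next_state = np.concatenate([sinr, gains])
    agent.update(state, a, reward, next_state)
    state = next_state

For the ensemble learning variant mentioned above, one plausible realization is to train several such agents independently and average their action-value estimates (or majority-vote their actions) at decision time; the abstract does not specify the exact combining rule.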
