Abstract
Cell-free massive multiple-input multiple-output (CF-mMIMO) is an emerging beyond fifth-generation (5G) technology that improves energy efficiency (EE) and removes the cell-structure limitation by using multiple access points (APs). This study investigates the EE maximization problem. Forming proper cooperation clusters is crucial when optimizing EE, and this is often done by selecting AP–user pairs with good channel quality or by aligning AP cache contents with user requests. However, the result can be suboptimal if the clusters are determined based solely on either aspect, which motivates our joint design of user association and content caching. Without prior knowledge of user content preferences, two deep reinforcement learning (DRL) approaches, i.e., single-agent reinforcement learning (SARL) and multi-agent reinforcement learning (MARL), are proposed for different scenarios. The SARL approach operates in a centralized manner, which places lower computational requirements on the edge devices. The MARL approach requires more computation resources at the edge devices but enables parallel computing to reduce the computation time, and therefore scales better than the SARL approach. The numerical analysis shows that the proposed approaches outperform benchmark algorithms in terms of network EE in a small network. In a large network, the MARL approach yields the best EE performance, and its computation time is reduced significantly by parallel computing.
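To make the joint design concrete, the following is a minimal sketch (not the authors' implementation) of the MARL idea described above: each AP acts as an independent learner whose action jointly selects which users to associate with and which contents to cache, and all agents share a network-EE reward. The network sizes, the EE proxy, and the random channel/request models are illustrative assumptions only.

```python
# Minimal sketch (assumptions only, not the paper's algorithm): per-AP agents
# jointly choose user association and cache placement via independent
# epsilon-greedy Q-learning with a shared network-EE reward.
import itertools
import numpy as np

NUM_APS, NUM_USERS, NUM_CONTENTS = 3, 4, 5   # illustrative network sizes
USERS_PER_AP, CACHE_SIZE = 2, 2

# Enumerate each AP's joint actions: (users to associate, contents to cache).
ASSOC_CHOICES = list(itertools.combinations(range(NUM_USERS), USERS_PER_AP))
CACHE_CHOICES = list(itertools.combinations(range(NUM_CONTENTS), CACHE_SIZE))
ACTIONS = list(itertools.product(ASSOC_CHOICES, CACHE_CHOICES))


def network_ee(actions, gains, requests):
    """Toy energy-efficiency proxy: served rate over power, rewarding cache hits."""
    rate, power = 0.0, 1.0                      # fixed circuit power (assumption)
    for ap, (assoc, cache) in enumerate(actions):
        for u in assoc:
            hit = requests[u] in cache          # cached content avoids backhaul cost
            rate += gains[ap, u] * (1.5 if hit else 1.0)
            power += 0.2                        # per-link transmit power (assumption)
    return rate / power


# Independent Q-learners, one table entry per joint action (stateless, bandit-style).
q = np.zeros((NUM_APS, len(ACTIONS)))
eps, lr = 0.1, 0.05
rng = np.random.default_rng(0)

for episode in range(2000):
    gains = rng.rayleigh(1.0, size=(NUM_APS, NUM_USERS))      # channel gains
    requests = rng.integers(0, NUM_CONTENTS, size=NUM_USERS)  # user content requests
    # Each agent picks its own action in parallel (epsilon-greedy exploration).
    idx = [int(rng.integers(len(ACTIONS))) if rng.random() < eps
           else int(np.argmax(q[a])) for a in range(NUM_APS)]
    reward = network_ee([ACTIONS[i] for i in idx], gains, requests)
    for a in range(NUM_APS):                    # shared-reward Q update
        q[a, idx[a]] += lr * (reward - q[a, idx[a]])

print("Greedy action per AP:", [ACTIONS[int(np.argmax(q[a]))] for a in range(NUM_APS)])
```

Because each agent updates only its own table from the common reward, the per-episode action selection and updates can run in parallel across APs, which is the scaling property the abstract attributes to the MARL approach.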