Abstract

Both caching and interference alignment (IA) are promising techniques for next-generation wireless networks. Nevertheless, most existing works on cache-enabled IA wireless networks assume an invariant channel, which is unrealistic given the time-varying nature of practical wireless environments. In this paper, we consider realistic time-varying channels, formulated as a finite-state Markov channel (FSMC). Because the system's complexity becomes very high under realistic FSMC models, we propose a novel deep reinforcement learning approach: an advanced reinforcement learning algorithm that uses a deep $Q$ network to approximate the action-value ($Q$) function. We implement the approach in Google TensorFlow to obtain the optimal IA user selection policy in cache-enabled opportunistic IA wireless networks. Simulation results show that the proposed approach significantly improves the network sum rate and energy efficiency of cache-enabled opportunistic IA networks.
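The core mechanism the abstract refers to, a deep $Q$ network approximating the action-value function, rests on the Bellman update toward the target $y = r + \gamma \max_{a'} Q_{\text{target}}(s', a')$. The following is a minimal sketch of that update only, with a linear approximator standing in for the deep network; all dimensions, the environment transition, and the hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Minimal sketch of the Q-learning update behind a deep Q network (DQN).
# A linear approximator Q(s, a) = s @ W[:, a] stands in for the deep
# network; sizes, rewards, and transitions below are purely illustrative.

n_state, n_action = 4, 3      # assumed state/action dimensions
gamma, lr = 0.9, 0.1          # discount factor and learning rate

W = np.zeros((n_state, n_action))   # online Q-network weights
W_target = W.copy()                 # frozen target-network weights

def q_values(s, weights):
    """Q(s, .) under the linear approximator."""
    return s @ weights

def dqn_step(s, a, r, s_next, W, W_target):
    """One gradient step toward y = r + gamma * max_a' Q_target(s', a')."""
    y = r + gamma * np.max(q_values(s_next, W_target))
    td_error = y - q_values(s, W)[a]
    W[:, a] += lr * td_error * s    # gradient of 0.5 * td_error**2 w.r.t. W[:, a]
    return td_error

# Repeated updates on one synthetic transition; Q(s, a=0) should approach
# the target r + gamma * 0 = 1.0 since the target network stays at zero.
s = np.array([1.0, 0.0, 0.0, 0.0])
s_next = np.array([0.0, 1.0, 0.0, 0.0])
for _ in range(50):
    dqn_step(s, a=0, r=1.0, s_next=s_next, W=W, W_target=W_target)

print(round(q_values(s, W)[0], 3))
```

In the paper's setting, the state would encode the FSMC channel states and cache status, and each action would select a set of IA users; a deep network replaces the linear map, but the target computation is the same.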
