Abstract
In this paper, the distributed edge caching problem with dynamic content recommendation in fog radio access networks (F-RANs) is investigated. Firstly, the joint caching and recommendation policy is transformed into a single caching policy by incorporating the recommendation policy into the caching policy, which halves the corresponding training complexity. Considering that no existing user request dataset involves content recommendation, we propose a time-varying personalized user request model to describe the fluctuating demands of each user after content recommendation. Then, to maximize the long-term net profit of each fog access point (F-AP), we formulate the caching optimization problem and resort to a reinforcement learning (RL) framework. Finally, to circumvent the curse of dimensionality of RL and speed up convergence, we propose a double deep Q-network (DDQN) based distributed edge caching algorithm to find the optimal caching policy with content recommendation. Simulation results show that the average net profit of our proposed algorithm is nearly 50% higher than that of traditional methods. Moreover, content recommendation indeed accelerates convergence and improves cache efficiency.
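To make the DDQN component concrete, the following is a minimal sketch of the double-Q target computation such a caching agent would use. The paper's abstract does not specify the implementation, so the state encoding, network architecture, and hyperparameters (STATE_DIM, NUM_ACTIONS, gamma) are illustrative assumptions, not the authors' actual design.

```python
# Minimal double DQN (DDQN) target sketch, assuming a small MLP Q-network.
# State/action encodings and all sizes below are hypothetical placeholders.
import torch
import torch.nn as nn

STATE_DIM = 8    # hypothetical: e.g. content-popularity features + cache occupancy
NUM_ACTIONS = 4  # hypothetical: e.g. which cached content to evict/replace

class QNet(nn.Module):
    """Small MLP mapping a cache state to Q-values over caching actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
gamma = 0.99  # discount factor over the long-term net profit

def ddqn_loss(s, a, r, s_next, done):
    """Double DQN: select the next action with the online net,
    evaluate it with the target net to reduce overestimation bias."""
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next
    return nn.functional.mse_loss(q, y)

# One gradient step on a toy random batch (replay buffer omitted for brevity).
batch = 32
s = torch.randn(batch, STATE_DIM)
a = torch.randint(0, NUM_ACTIONS, (batch,))
r = torch.randn(batch)  # stands in for the per-slot net profit of an F-AP
s_next = torch.randn(batch, STATE_DIM)
done = torch.zeros(batch)

loss = ddqn_loss(s, a, r, s_next, done)
opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice, decoupling action selection (online network) from action evaluation (target network), is what distinguishes DDQN from vanilla DQN and mitigates the Q-value overestimation that can slow or destabilize convergence.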