Abstract

Offloading cellular traffic via Device-to-Device communication (D2D offloading) has proven to be an effective way to ease the traffic burden on cellular networks. However, because the data offloading process consumes considerable node resources, mobile nodes may be unwilling to participate in D2D offloading without proper financial incentives. It is therefore imperative to design effective incentive mechanisms that motivate nodes to participate in D2D offloading. Furthermore, the design of the content caching strategy is also crucial to the performance of D2D offloading. Considering these issues, this paper proposes a novel Incentive-driven and Deep Q Network (DQN) based Method, named IDQNM, in which a reverse auction is employed as the incentive mechanism. The incentive-driven D2D offloading and content caching process is then modeled as an Integer Non-Linear Programming (INLP) problem that aims to maximize the saving cost of the Content Service Provider (CSP). To solve this optimization problem, a content caching method based on the Deep Reinforcement Learning (DRL) algorithm DQN is proposed to obtain an approximately optimal solution, and a standard Vickrey-Clarke-Groves (VCG) based payment rule is proposed to compensate mobile nodes for their costs. Extensive real-trace-driven simulations demonstrate that the proposed IDQNM significantly outperforms baseline methods in terms of the CSP's saving cost and the offloading rate across different scenarios.
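To make the DQN-based caching idea concrete, the following is a minimal illustrative sketch, not the paper's IDQNM implementation: it assumes a toy setting in which the state is a vector of per-content request statistics plus cache occupancy, the action selects which content item to cache, and the reward stands in for the CSP's saving-cost signal. All names here (QNetwork, N_CONTENTS, CACHE_SLOTS, the random transitions) are hypothetical placeholders.

```python
# Minimal DQN sketch for a content-caching decision (illustrative assumptions only).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_CONTENTS, CACHE_SLOTS = 20, 5                      # hypothetical problem size
STATE_DIM, N_ACTIONS = N_CONTENTS + CACHE_SLOTS, N_CONTENTS

class QNetwork(nn.Module):
    """Small MLP mapping a caching state to one Q-value per candidate content."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def select_action(q_net, state, epsilon):
    """Epsilon-greedy selection of the content item to cache next."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1))

def train_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One DQN update: regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    states, actions, rewards, next_states = (torch.stack(x) for x in zip(*batch))
    q_sa = q_net(states).gather(1, actions.long().view(-1, 1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage example with random transitions standing in for the caching environment.
q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)

state = torch.rand(STATE_DIM)
for step in range(200):
    action = select_action(q_net, state, epsilon=0.1)
    # In a real setup the reward would reflect the CSP's saving cost; random here.
    reward, next_state = torch.rand(()), torch.rand(STATE_DIM)
    replay.append((state, torch.tensor(action), reward, next_state))
    state = next_state
    if len(replay) >= 32:
        train_step(q_net, target_net, optimizer, random.sample(replay, 32))
    if step % 50 == 0:
        target_net.load_state_dict(q_net.state_dict())
```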
