Abstract

In existing vehicular edge computing research, the limited computing resources of edge servers are often not fully considered. Likewise, when deep reinforcement learning (DRL) is applied to vehicular edge computing problems, its need for large amounts of real-time data and its susceptibility to gradient explosion or vanishing, overfitting, and local optima are often overlooked. This article therefore proposes a digital twin (DT)-assisted vehicular cloud–edge computing offloading scheme. To address the limited computing resources of edge servers, it proposes a collaboration scheme among vehicles, the cloud, and edge servers. To meet DRL's demand for large amounts of real-time data, it proposes a real-time data acquisition method based on DT technology. To mitigate the gradient explosion or vanishing, overfitting, and local-optimum problems of the asynchronous advantage actor–critic (A3C) algorithm, it proposes an improved A3C algorithm that introduces an ε-greedy strategy and a dynamic baseline to encourage exploration, and applies Dropout to improve the model's generalization ability. During training, the stochastic gradient descent (SGD) algorithm is used to accelerate training and reduce computational complexity. Simulation results show that, compared with other schemes, the improved algorithm proposed in this article can effectively reduce the delay and energy consumption of task offloading in the vehicular edge cloud.
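The two A3C modifications named above can be illustrated with a minimal numpy-only sketch. This is not the paper's implementation: the function names, the use of a softmax policy, and the choice of an exponential moving average as the "dynamic baseline" are all assumptions made for illustration; the actual A3C networks, Dropout layers, and training loop are omitted.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits.
    z = np.exp(logits - logits.max())
    return z / z.sum()

def epsilon_greedy_action(policy_logits, epsilon, rng):
    """With probability epsilon take a uniformly random action (exploration);
    otherwise sample from the policy distribution (exploitation).
    Illustrative only -- the paper's exact exploration schedule is not shown here."""
    n_actions = len(policy_logits)
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(rng.choice(n_actions, p=softmax(policy_logits)))

def advantages_with_dynamic_baseline(returns, baseline, beta=0.1):
    """One plausible reading of a 'dynamic baseline': an exponential moving
    average of observed returns, subtracted from each return to reduce the
    variance of the policy-gradient estimate."""
    advantages = []
    for r in returns:
        advantages.append(r - baseline)
        baseline = (1.0 - beta) * baseline + beta * r  # update baseline online
    return np.array(advantages), baseline
```

For example, with returns `[1.0, 1.0, 1.0]`, an initial baseline of `0.0`, and `beta=0.5`, the baseline drifts toward the observed return and the advantages shrink accordingly, which is the variance-reduction effect a baseline is meant to provide.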
