Abstract

Driven by the construction of smart cities, network and communication technologies are gradually being integrated into Internet of Things (IoT) applications in urban infrastructure, such as autonomous driving. In the Internet of Vehicles (IoV) environment, intelligent vehicles generate large volumes of data, but the limited computing power of in-vehicle terminals cannot meet the resulting demand. To address this problem, we first model task offloading for vehicle terminals in a Mobile Edge Computing (MEC) environment. Second, based on this model, we design and implement a MEC server collaboration scheme that considers both delay and energy consumption. Third, drawing on optimization theory, we formulate the system optimization problem with the goal of minimizing system cost. Because the resulting problem is a mixed binary nonlinear program, we model it as a Markov Decision Process (MDP), turning the original resource allocation decision into a Reinforcement Learning (RL) problem, and apply Deep Reinforcement Learning (DRL) to approach the optimal solution. Finally, we propose a Deep Deterministic Policy Gradient (DDPG) algorithm to handle task offloading and scheduling optimization in a high-dimensional continuous action space, using an experience replay mechanism to accelerate convergence and enhance the stability of the network. Simulation results show that our scheme performs well in terms of convergence, system delay, average task energy consumption, and system cost. For example, compared with the baseline algorithms, it improves system cost performance by 9.12% across different task sizes, indicating that our scheme is better suited to the highly dynamic Internet of Vehicles environment.
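The experience replay mechanism mentioned above stores past transitions and samples them uniformly at random, which breaks temporal correlations in the training data and stabilizes learning. As a minimal illustrative sketch (not the paper's implementation; all names and the toy transition values are hypothetical):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay buffer.

    Old transitions are discarded automatically once capacity is
    reached; minibatches are drawn uniformly at random.
    """
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one (s, a, r, s', done) transition from an episode.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch used for the DDPG network update.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Hypothetical usage: transitions collected during one offloading episode.
buf = ReplayBuffer(capacity=10000)
for t in range(100):
    buf.push(state=t, action=0.5, reward=-1.0, next_state=t + 1, done=False)

batch = buf.sample(32)
print(len(batch))  # 32
```

In a full DDPG training loop, each sampled minibatch would feed the critic's temporal-difference update and the actor's policy-gradient step; the buffer itself is agnostic to what the state and action encode (here, offloading decisions and resource allocations).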
