Abstract

Mobile edge computing (MEC) and cloud computing (CC) are considered key technologies for improving task processing efficiency in the Internet of Vehicles (IoV). In this article, we consider a scenario with random traffic flow and a dynamic network environment, in which MEC and CC collaborate to process delay-sensitive and computation-intensive tasks in IoV. We study the joint optimization of computation offloading and resource allocation (CORA) with the objective of minimizing the system cost of processing tasks, subject to processing delay and transmission rate constraints. To address the challenges posed by the dynamic environment, we formulate the dynamic optimization problem as a Markov decision process and apply a deep reinforcement learning (DRL) technique to handle high-dimensional, continuous state and action spaces. We then design a CORA algorithm that effectively learns the optimal scheme by adapting to the network dynamics. Extensive simulation experiments compare the CORA algorithm with both non-DRL and DRL baselines. The results show that CORA outperforms the others, with excellent training convergence and superior processing delay and processing cost.
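To make the offloading trade-off concrete, the following is a minimal toy sketch of the kind of decision the CORA agent faces. All numerical values (CPU frequencies, uplink rate, task parameters) are illustrative assumptions, not figures from the paper, and the grid search over the continuous offload fraction is only a stand-in for the learned DRL policy.

```python
# Hypothetical toy model of the offloading decision: a task is described by
# (data size in bits, required CPU cycles), and the agent picks a continuous
# offload fraction in [0, 1] that splits the work between the vehicle's local
# CPU and the MEC server. All constants below are illustrative assumptions.
LOCAL_FREQ = 1e9        # local CPU speed, cycles/s (assumed)
MEC_FREQ = 10e9         # MEC server speed, cycles/s (assumed)
UPLINK_RATE = 20e6      # vehicle-to-MEC uplink, bits/s (assumed)

def cost(task, offload_frac):
    """System cost = processing delay of the slower branch (local vs. offloaded)."""
    size, cycles = task
    local_delay = (1 - offload_frac) * cycles / LOCAL_FREQ
    mec_delay = (offload_frac * size / UPLINK_RATE
                 + offload_frac * cycles / MEC_FREQ)
    return max(local_delay, mec_delay)

def best_offload(task, grid=101):
    """Grid-search the continuous action space; a DRL policy would learn this."""
    fracs = [i / (grid - 1) for i in range(grid)]
    return min(fracs, key=lambda a: cost(task, a))

task = (4e6, 2e9)  # 4 Mbit of data, 2 Gcycles of computation (assumed)
a = best_offload(task)
print(f"offload fraction {a:.2f}, delay {cost(task, a):.3f} s")
```

Even in this simplified setting, the optimal action balances local computing delay against transmission-plus-remote delay; the DRL approach in the paper additionally adapts this decision to a changing network state rather than a fixed one.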
