Abstract
Mobile edge computing (MEC) and cloud computing (CC) are considered key technologies for improving task processing efficiency in the Internet of Vehicles (IoV). In this article, we consider a scenario with random traffic flow and a dynamic network environment, in which MEC and CC collaborate to process delay-sensitive and computation-intensive tasks in IoV. We study the joint optimization of computation offloading and resource allocation (CORA) with the objective of minimizing the system cost of processing tasks, subject to processing delay and transmission rate constraints. To address the challenges posed by the dynamic environment, we formulate the dynamic optimization problem as a Markov decision process and apply a deep reinforcement learning (DRL) technique to handle the high-dimensional, continuous state and action spaces. We then design a CORA algorithm that effectively learns the optimal scheme by adapting to network dynamics. Extensive simulation experiments compare the CORA algorithm with both non-DRL and DRL algorithms. The results show that CORA outperforms the others, with excellent training convergence and superior performance in processing delay and processing cost.
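As a rough illustration of the offloading trade-off the abstract describes, the sketch below models a one-step decision where the continuous action is the fraction of a task offloaded to an edge server, and the cost is the resulting processing delay. All constants (CPU rates, computation intensity, task size, uplink rate) are hypothetical placeholders, and the grid search is only a baseline policy, not the paper's DRL-based CORA algorithm.

```python
import random

# Toy illustration (NOT the paper's CORA algorithm): state is
# (task_size_bits, uplink_rate_bps); the continuous action is the
# fraction of the task offloaded to the edge server.
LOCAL_CPS = 1e8        # local CPU cycles per second (assumed)
EDGE_CPS = 5e8         # edge server cycles per second (assumed)
CYCLES_PER_BIT = 500   # computation intensity (assumed)

def processing_delay(state, offload_ratio):
    """Delay when a fraction of the task runs on the edge server."""
    size_bits, rate_bps = state
    local_bits = size_bits * (1.0 - offload_ratio)
    edge_bits = size_bits * offload_ratio
    t_local = local_bits * CYCLES_PER_BIT / LOCAL_CPS
    # Offloaded part pays transmission delay plus edge execution delay.
    t_edge = edge_bits / rate_bps + edge_bits * CYCLES_PER_BIT / EDGE_CPS
    return max(t_local, t_edge)  # local and edge parts run in parallel

def best_ratio(state, grid=101):
    """Discretized search over the continuous action, as a simple baseline."""
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda a: processing_delay(state, a))

state = (1e6, 2e6)  # 1 Mb task, 2 Mb/s uplink (assumed values)
a_star = best_ratio(state)
```

With these assumed constants, fully local execution and full offloading are both slower than splitting the task, so the searched ratio lands strictly between 0 and 1; the paper's contribution is learning such decisions under dynamic states with DRL rather than enumerating them.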