Abstract

The role of fog computing in future vehicular networks is becoming increasingly significant, enabling applications that demand high computing resources and low latency, such as augmented reality and autonomous driving. Fog-based computation offloading and service caching are two key enablers for the efficient execution of resource-demanding services in such applications. While some efforts have addressed computation offloading in fog computing, little work has considered the joint optimization of computation offloading and service caching. Because fog platforms are usually equipped with only moderate computing and storage resources, we must judiciously decide which services to cache when offloading computation tasks in order to maximize system performance. The heterogeneity, dynamicity, and stochastic properties of vehicular networks also pose challenges to optimal offloading and resource allocation. In this paper, we propose an intelligent computation offloading architecture with service caching that considers both peer-pool and fog-pool computation offloading. We formulate a joint computation offloading and service caching optimization problem that minimizes task processing time and long-term energy consumption. Finally, we propose an algorithm based on deep reinforcement learning to solve this complex optimization problem. Extensive simulations are undertaken to verify the feasibility of the proposed scheme. The results show that it achieves a notable improvement in computation latency and energy consumption compared to the chosen baseline.

