Abstract

The advancement of the Internet of Vehicles has brought about various vehicular applications, some of which are computation-intensive or delay-sensitive. Vehicular fog computing (VFC) can provide low-latency computing services by letting vehicles share their idle computing resources with each other. However, in the highly dynamic vehicular environment, both the vehicle-to-vehicle (V2V) communication links and the onboard idle computing resources are time-variant, and ensuring efficient V2V computation offloading under these conditions is a major challenge. In this work, we design a multi-task offloading model that accounts for the dynamics of V2V communication links and the allocation of computing resources in vehicles. To make the task offloading policy adapt to the dynamic environment, we propose a deep reinforcement learning (DRL)-based V2V computation offloading algorithm. Moreover, the computation offloading experience available in a single vehicle is generally limited, and the training process consumes considerable energy and time. To reduce the model training workload in vehicles and protect their privacy with respect to computation offloading, we further propose a federated DRL algorithm based on the double deep Q-network (DDQN), in which the DDQN model is trained cooperatively across multiple vehicles. Numerical simulation results validate the convergence of the proposed federated DDQN algorithm and show that it offloads computation more efficiently than the baseline algorithms.
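To make the two core ingredients named above concrete, the following is a minimal, illustrative sketch of a double-DQN target computation combined with FedAvg-style parameter averaging across vehicles. It is not the paper's implementation: the linear Q-function, the constants (`N_ACTIONS`, `GAMMA`, `VEHICLES`), and the function names are all assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): DDQN target + FedAvg averaging.
# Assumptions: a toy linear Q-function stands in for the DQN, and plain
# parameter averaging stands in for the paper's federated training procedure.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, GAMMA, VEHICLES, STATE_DIM = 4, 0.99, 3, 8  # hypothetical sizes

def init_params():
    """One linear Q-head per vehicle: Q(s) = W s + b (toy stand-in for a DQN)."""
    return {"W": rng.normal(size=(N_ACTIONS, STATE_DIM)),
            "b": np.zeros(N_ACTIONS)}

def q_values(params, state):
    """Q-values for all actions in a given state."""
    return params["W"] @ state + params["b"]

def ddqn_target(online, target, reward, next_state, done):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    a_star = int(np.argmax(q_values(online, next_state)))
    return reward + (0.0 if done else GAMMA * q_values(target, next_state)[a_star])

def fed_avg(param_list):
    """FedAvg: element-wise average of the vehicles' local model parameters."""
    return {k: np.mean([p[k] for p in param_list], axis=0) for k in param_list[0]}

# Each vehicle trains its DDQN locally (training loop omitted), then the
# parameters are aggregated into a shared global model.
local_models = [init_params() for _ in range(VEHICLES)]
global_model = fed_avg(local_models)
state = rng.normal(size=STATE_DIM)
print("DDQN target:", ddqn_target(global_model, global_model,
                                  reward=1.0, next_state=state, done=False))
```

Averaging model parameters rather than sharing raw offloading experiences is what lets the vehicles train cooperatively without exposing their local data, which is the privacy motivation stated in the abstract.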
