This paper investigates the computation-offloading problem in a high-mobility Internet of Vehicles (IoV) environment, aiming to satisfy latency, energy-consumption, and payment-cost requirements. Both moving and parked vehicles are utilized as fog nodes. Vehicles in high-mobility environments need collaborative interaction in a decentralized manner for better network performance, yet the joint action space of the agents grows exponentially with the number of vehicles. Vehicular mobility introduces additional dynamics into the network, and the learning agents require joint cooperative behavior to achieve convergence. Traditional deep reinforcement learning (DRL)-based offloading in IoV treats each agent as an independent learner that ignores the other agents' actions during training, which leaves it lacking robustness in a high-mobility environment. To overcome this, we develop a cooperative, three-layer, more generic decentralized vehicle-assisted multi-access edge computing (VMEC) network, where vehicles in the associated RSU and neighboring RSUs form the bottom fog layer, MEC servers form the middle cloudlet layer, and the cloud forms the top layer. A multi-agent DRL-based Hungarian algorithm (MADRLHA), which solves a maximum-matching problem on a bipartite graph, is then applied to dynamic task offloading in VMEC. Extensive experiments and comprehensive comparisons illustrate the superiority of the proposed method.
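The abstract does not spell out the matching step, but the core of a Hungarian-algorithm-based offloading decision can be sketched as an assignment on a bipartite graph of tasks and fog vehicles. The sketch below is illustrative, not the paper's MADRLHA: the cost matrix is hypothetical (in the paper, such costs would be shaped by the DRL agents from latency, energy, and payment terms), and SciPy's `linear_sum_assignment` stands in for the Hungarian solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: rows = offloadable tasks, columns = candidate
# fog vehicles. Each entry is an assumed weighted sum of latency, energy
# consumption, and payment cost for running that task on that vehicle.
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# Hungarian algorithm: one-to-one task-to-vehicle assignment that
# minimizes the total cost over the bipartite graph.
rows, cols = linear_sum_assignment(cost)
assignment = list(zip(rows.tolist(), cols.tolist()))
total = cost[rows, cols].sum()
print(assignment, total)  # optimal matching and its total cost
```

In the multi-agent setting described above, each agent would produce its own cost estimates online as vehicles join and leave, and the matching would be re-solved as the topology changes.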