Abstract
This paper investigates the computation-offloading problem in a high-mobility Internet of Vehicles (IoV) environment, aiming to satisfy latency, energy-consumption, and payment-cost requirements. Both moving and parked vehicles are utilized as fog nodes. Vehicles in high-mobility environments must interact collaboratively in a decentralized manner to achieve better network performance, yet the agent action space grows exponentially with the number of vehicles. Vehicular mobility introduces additional dynamics into the network, so the learning agents require joint cooperative behavior to establish convergence. Traditional deep reinforcement learning (DRL)-based offloading in IoV treats each agent as an independent learner that ignores the other agents' actions during training, which leaves it lacking robustness in high-mobility environments. To overcome this, we develop a cooperative, more generic three-layer decentralized vehicle-assisted multi-access edge computing (VMEC) network, in which vehicles within the associated RSU and neighboring RSUs form the bottom fog layer, MEC servers form the middle cloudlet layer, and the cloud forms the top layer. A multi-agent DRL-based Hungarian algorithm (MADRLHA), formulated as a bipartite-graph maximum-matching problem, is then applied to solve dynamic task offloading in VMEC. Extensive experimental results and comprehensive comparisons illustrate the superiority of the proposed method.
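To make the matching step concrete, the sketch below shows how a task-to-fog-node assignment can be posed as a bipartite assignment problem and solved with the Hungarian algorithm (here via SciPy's `linear_sum_assignment`). The cost matrix values are invented for illustration only; in the paper's setting, each entry would combine the latency, energy, and payment costs that the DRL agents learn to estimate.

```python
# Illustrative only: bipartite task-to-fog-node assignment solved with
# the Hungarian algorithm (scipy.optimize.linear_sum_assignment).
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: hypothetical combined latency/energy/payment cost of
# offloading task i to fog node j (rows = tasks, cols = fog nodes)
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# Hungarian algorithm: minimum-cost perfect matching on the bipartite graph
task_idx, node_idx = linear_sum_assignment(cost)
assignment = dict(zip(task_idx.tolist(), node_idx.tolist()))
total_cost = float(cost[task_idx, node_idx].sum())
# assignment -> {0: 1, 1: 0, 2: 2}, total_cost -> 5.0
```

A maximum-matching (utility-maximizing) variant is obtained the same way by passing `maximize=True` or by negating the cost matrix.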