Abstract
Mobile edge computing (MEC) frees computation-intensive applications in the Internet of Vehicles (IoV) from the resource limits of individual devices. However, without an effective task scheduling strategy, users' quality of experience (QoE) suffers severely. In this paper, a task-type-based task offloading and resource allocation strategy is proposed to reduce the delay and energy consumption of task execution. First, we establish communication, computing, and system cost models based on the task offloading scheme, and formulate the joint optimization of task offloading and resource allocation as a Markov decision process; the utility function combines the task completion rate with the system cost. Second, an algorithmic framework based on multi-agent deep deterministic policy gradient (MADDPG) is designed to overcome the poor convergence of traditional single-agent reinforcement learning algorithms in dynamic environments. In distributed scenarios, the proposed framework also reduces system cost while handling more tasks. Finally, federated learning is introduced into the training process to mitigate the impact of non-IID data while preserving privacy. Simulation results show that, compared with popular reinforcement learning algorithms, the proposed algorithm effectively improves system processing efficiency and reduces device energy consumption.
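To make the final step concrete, the sketch below shows one common way federated learning aggregates locally trained network parameters (a FedAvg-style weighted average), which is the kind of aggregation the abstract's federated training step could use. The function name, the dictionary-of-arrays parameter format, and the use of plain NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def federated_average(local_params, weights=None):
    """FedAvg-style aggregation of per-agent parameter sets.

    local_params: list of dicts mapping layer name -> np.ndarray,
                  one dict per agent's locally trained network.
    weights:      optional per-agent weights (e.g. proportional to
                  local sample counts); defaults to uniform.
    """
    n = len(local_params)
    if weights is None:
        weights = np.full(n, 1.0 / n)
    else:
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()

    # Weighted average of each parameter tensor across agents;
    # the result is broadcast back to every agent as the new
    # global model before the next round of local training.
    global_params = {}
    for name in local_params[0]:
        global_params[name] = sum(
            w * p[name] for w, p in zip(weights, local_params)
        )
    return global_params

# Example: three agents, each holding a single 2x2 weight matrix.
agents = [{"w": np.random.randn(2, 2)} for _ in range(3)]
print(federated_average(agents)["w"])
```

Because only model parameters (not raw task or trajectory data) are exchanged, this style of aggregation preserves privacy while smoothing out the effect of non-IID local data, consistent with the motivation stated above.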