Abstract
Task offloading combined with reinforcement learning (RL) is a promising research direction in edge computing. However, the intractability of RL training and the heterogeneity of network devices have hindered the application of RL in large-scale networks. Moreover, traditional RL algorithms lack mechanisms to share information effectively across a heterogeneous environment, and this lack of global information makes convergence even harder. This article focuses on the task offloading problem in a heterogeneous environment. First, we give a formalized representation of the Lyapunov function to normalize both the data and virtual energy queue operations. Subsequently, we jointly consider the computing rate and energy consumption in task offloading and derive the optimization target via Lyapunov optimization. A Deep Deterministic Policy Gradient (DDPG)-based model over multiple continuous decision variables is proposed to make optimal offloading decisions in edge computing. To handle the heterogeneous environment, we improve Hetero Federated Learning (HFL) by introducing Kullback-Leibler (KL) divergence to accelerate the convergence of our DDPG-based model. Experiments demonstrate that our algorithm accelerates the search for the optimal task offloading decision in heterogeneous environments.
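The abstract's KL-divergence extension to HFL can be illustrated with a minimal sketch. The paper does not specify the exact aggregation rule, so everything below is an assumption: we suppose each client's DDPG policy is summarized by an output distribution, and that clients whose distribution is closer (in KL divergence) to the global one receive larger aggregation weights. Function names (`kl_divergence`, `kl_weighted_aggregate`) and the softmax-style weighting are hypothetical, not from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for two discrete probability distributions.
    # eps clipping avoids log(0) / division by zero.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def kl_weighted_aggregate(client_params, client_dists, global_dist):
    # Hypothetical aggregation: weight each client's flattened model
    # parameters by exp(-KL(client || global)), normalized, so clients
    # whose policy output is closer to the global distribution
    # contribute more to the aggregated model.
    kls = np.array([kl_divergence(d, global_dist) for d in client_dists])
    weights = np.exp(-kls)
    weights /= weights.sum()
    stacked = np.stack(client_params)       # (n_clients, n_params)
    return np.tensordot(weights, stacked, axes=1)
```

For example, a client whose distribution matches the global one exactly has KL divergence 0 and therefore the largest weight; a badly drifted client is down-weighted smoothly rather than dropped, which is one plausible way such a scheme could stabilize convergence in a heterogeneous setting.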
Published in: IEEE Transactions on Network and Service Management