Abstract

Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demands of modern user applications. Task offloading migrates complex tasks from user devices to remote edge-cloud servers, reducing the devices' computational burden and energy consumption while improving the efficiency of task processing. However, finding the optimal offloading strategy in a multi-task offloading decision-making process is NP-hard, and existing deep learning techniques, with their slow learning rates and weak adaptability, are ill-suited to dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement-learning approach to the multi-task offloading problem that combines first-order meta-learning with deep Q-learning. We establish meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while adapting to dynamic edge-cloud environments.
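The abstract describes combining first-order meta-learning with Q-learning but gives no algorithmic detail. The following is a minimal, hypothetical sketch (not the authors' implementation) of the general pattern: an inner loop adapts Q-values to a sampled task, and a Reptile-style first-order outer loop moves the meta-parameters toward the adapted ones. The task model, tabular Q representation, step sizes, and random-transition dynamics here are all illustrative assumptions; the paper's method presumably uses deep Q-networks and a real edge-cloud environment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and step sizes (assumptions, not the paper's values).
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.1, 0.9        # inner-loop learning rate, discount factor
EPSILON = 0.5                  # outer-loop (meta) step size
INNER_STEPS, META_ITERS = 50, 20

def sample_task():
    """Each 'task' is a random reward table over (state, action) pairs,
    standing in for one edge-cloud offloading environment."""
    return rng.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))

def inner_q_learning(theta, rewards):
    """Adapt a copy of the meta-parameters with tabular Q-learning."""
    q = theta.copy()
    s = rng.integers(N_STATES)
    for _ in range(INNER_STEPS):
        # epsilon-greedy action selection
        a = rng.integers(N_ACTIONS) if rng.random() < 0.1 else int(q[s].argmax())
        r = rewards[s, a]
        s_next = rng.integers(N_STATES)   # toy random state transition
        q[s, a] += ALPHA * (r + GAMMA * q[s_next].max() - q[s, a])
        s = s_next
    return q

# Reptile-style first-order meta-update: rather than differentiating
# through the adaptation process, simply move the meta-parameters a
# fraction of the way toward each task's adapted parameters.
theta = np.zeros((N_STATES, N_ACTIONS))
for _ in range(META_ITERS):
    phi = inner_q_learning(theta, sample_task())
    theta += EPSILON * (phi - theta)
```

With a deep Q-network, `theta` and `phi` would be network weight vectors and the inner loop a few gradient steps on the task's replay data, but the first-order meta-update keeps the same form.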
