Abstract

In the 5G/6G era of networking, computational offloading, i.e., the act of transferring resource-intensive computational tasks to separate external devices in network proximity, constitutes a paradigm shift for mobile task execution on Edge Computing infrastructures. However, in order to provide firm Quality of Service (QoS) assurances for all the involved users, the offloading decisions must be planned meticulously, which potentially involves inter-site task transferring. In this paper, we consider a multi-user, multi-site Multi-Access Edge Computing (MEC) infrastructure, where mobile devices (MDs) can offload their tasks to the available edge sites (ESs). Our goal is to minimize the end-to-end delay and energy consumption, which together constitute the sum cost of the considered system, while complying with the MDs' application requirements. To this end, we introduce a two-stage Reinforcement Learning (RL)-based mechanism, where the MD-to-ES task offloading and the ES-to-ES task transferring decisions are iteratively optimized. The proper operation, effectiveness, and efficiency of the proposed offloading mechanism are assessed under various evaluation scenarios.

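The abstract does not spell out the algorithmic details, so the following Python sketch only illustrates the two-stage, iteratively optimized decision structure it describes. The paper's mechanism is RL-based; here a simple greedy alternating minimization stands in for the learned policies, purely to show how MD-to-ES offloading and ES-to-ES transferring decisions can be refined in turn against a sum cost of delay and energy. All names and cost matrices (num_mds, num_ess, delay, energy, transfer, exec_delay) are hypothetical placeholders, not quantities taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

num_mds, num_ess = 6, 3                                 # hypothetical numbers of MDs and ESs
delay = rng.uniform(1.0, 5.0, (num_mds, num_ess))       # MD-to-ES offloading delay (s)
energy = rng.uniform(0.1, 1.0, (num_mds, num_ess))      # MD energy cost per offload (J)
transfer = rng.uniform(0.5, 2.0, (num_ess, num_ess))    # ES-to-ES transfer delay (s)
np.fill_diagonal(transfer, 0.0)                         # no cost if the task stays on the same ES
exec_delay = rng.uniform(0.5, 3.0, (num_mds, num_ess))  # execution delay at each candidate ES (s)

def sum_cost(offload, placement):
    # Sum cost of the system: offloading delay + energy + inter-site transfer
    # + execution delay at the ES that finally runs the task.
    total = 0.0
    for md in range(num_mds):
        es_in, es_exec = offload[md], placement[md]
        total += (delay[md, es_in] + energy[md, es_in]
                  + transfer[es_in, es_exec] + exec_delay[md, es_exec])
    return total

# Initialization: each MD offloads to, and executes at, its locally cheapest ES.
offload = np.argmin(delay + energy + exec_delay, axis=1)
placement = offload.copy()

for it in range(5):
    # Stage 1: with the ES-to-ES placement fixed, re-optimize MD-to-ES offloading.
    for md in range(num_mds):
        stage1 = [delay[md, es] + energy[md, es] + transfer[es, placement[md]]
                  for es in range(num_ess)]
        offload[md] = int(np.argmin(stage1))
    # Stage 2: with the offloading fixed, re-optimize ES-to-ES task transferring.
    for md in range(num_mds):
        stage2 = [transfer[offload[md], es] + exec_delay[md, es]
                  for es in range(num_ess)]
        placement[md] = int(np.argmin(stage2))
    print(f"iteration {it}: sum cost = {sum_cost(offload, placement):.3f}")

In this toy model the MDs are independent, so the alternation converges almost immediately; in a real multi-user MEC setting the MDs contend for shared edge resources, which couples their decisions and is precisely the kind of structure that motivates learning-based policies rather than independent greedy choices.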