Abstract

Edge computing is a promising computing model that moves storage and computing capabilities to the network edge, substantially decreasing service latency and network traffic. Existing Internet of Things (IoT) offloading algorithms face limitations such as a fixed number of applications, high edge-to-edge delay, and reliance on a single Mobile Edge Computing (MEC) server, which raises security and privacy concerns. Moreover, resource-constrained mobile devices need more effective data integration and compression strategies. To address these challenges, this study proposes a deep reinforcement learning (DRL) approach, the SAV (State Action Value) Oriented QL (Q-Learning) based Task Offloading method, to optimise task offloading and resource allocation in edge-cloud computing. The model enables Mobile Devices (MDs) to make offloading decisions that are optimal for long-term quality perception, using a neural network to learn the relationship between an MD's state and the value of each action. The paper also introduces a Recurrent Extended Memory Network (REMN) to capture dynamic workload behaviour at Edge Nodes (ENs), and it incorporates Quality Mapping, Quality Estimation, and a Quality-Aware DRL Task Offloading Algorithm to improve the accuracy and efficiency of the offloading procedure in MEC systems. This systematic approach improves overall system performance and lets MDs leverage ENs for neural network training, reducing their computational burden. As a result, the method completes a larger number of tasks, with latency ranging from 0.74 ms to 7.168 ms and energy consumption from 270 J to 1820.39 J as the number of tasks grows from 10 to 50.
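To make the Q-learning side of the abstract concrete, the sketch below shows a minimal tabular Q-learning loop for a binary offloading decision (execute locally vs. offload to an edge node). It is an illustrative toy only: the state space (discretised edge-node load), the reward function, and the i.i.d. load dynamics are hypothetical stand-ins, and the paper's SAV-oriented method replaces the Q-table with a neural network trained at the ENs.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 4    # hypothetical discretised edge-node load levels
N_ACTIONS = 2   # 0 = execute locally, 1 = offload to edge node
ALPHA, GAMMA, EPSILON = 0.1, 0.5, 0.2

Q = np.zeros((N_STATES, N_ACTIONS))

def reward(state, action):
    # Hypothetical reward: offloading pays off when edge load is low,
    # local execution yields a small fixed reward.
    if action == 1:
        return 1.0 - 0.5 * state
    return 0.2

for episode in range(5000):
    state = rng.integers(N_STATES)
    # Epsilon-greedy action selection.
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(state, action)
    next_state = rng.integers(N_STATES)  # toy dynamics: load is i.i.d.
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += ALPHA * (r + GAMMA * Q[next_state].max() - Q[state, action])

# The greedy policy derived from Q offloads only when edge load is low.
policy = Q.argmax(axis=1)
```

In the full DRL setting, the table `Q` becomes a network mapping the MD's state to action values, which is what allows the policy to generalise across a varying number of applications rather than a fixed one.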
