Abstract

With the rapid development of the Internet of Things (IoT) and next-generation communication technologies, resource-constrained mobile devices (MDs) cannot meet the demands of resource-hungry, compute-intensive applications. With the assistance of mobile-edge computing (MEC), offloading complex tasks from MDs to edge cloud servers (CSs) or central CSs reduces the computational burden on devices and improves task-processing efficiency. However, optimal offloading decisions are difficult to obtain with conventional heuristic optimization methods because the decision problem is usually NP-hard. Intelligent decision-making methods also have shortcomings, e.g., a lack of training samples and poor transferability across different MEC environments. To this end, we propose a novel offloading algorithm, meta reinforcement-deep reinforcement learning-based offloading, which consists of a meta-reinforcement learning (meta-RL) model that improves the transferability of the whole model, and a deep reinforcement learning (DRL) model that combines multiple parallel deep neural networks (DNNs) to learn from historical task-offloading scenarios. Simulation results demonstrate that our approach effectively and efficiently generates near-optimal offloading decisions in IoT environments with edge and cloud collaboration, further improving computational performance while retaining strong portability of offloading decisions.
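To make the parallel-DNN decision structure concrete, here is a minimal sketch of one decision epoch, not the paper's exact method: several parallel decision networks (stubbed as random linear scorers) each propose a candidate binary offloading vector, the candidate with the lowest cost is kept, and the (state, decision) pair is stored for replay-based training. The cost model, network stubs, and all parameter names (`N_TASKS`, `K_NETS`, `offload_cost`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TASKS = 5  # tasks per decision epoch (assumed)
K_NETS = 4   # number of parallel decision networks (assumed)

def candidate_decision(weights, state):
    """One stub 'network': threshold an elementwise score into a 0/1 offload vector."""
    return (state * weights > 0).astype(int)

def offload_cost(decision, state):
    """Hypothetical cost: local execution cost vs. a cheaper edge-offload cost."""
    local = np.where(decision == 0, state, 0.0).sum()         # tasks executed locally
    remote = np.where(decision == 1, 0.5 * state, 0.0).sum()  # tasks offloaded to the edge
    return local + remote

state = rng.uniform(0.1, 1.0, size=N_TASKS)            # e.g. normalized task sizes
nets = [rng.normal(size=N_TASKS) for _ in range(K_NETS)]

# Each parallel network proposes a candidate decision; keep the cheapest one.
candidates = [candidate_decision(w, state) for w in nets]
costs = [offload_cost(d, state) for d in candidates]
best = int(np.argmin(costs))

# Store the best (state, decision) pair; a real system would periodically
# sample such pairs from replay memory to train the DNNs.
replay_memory = [(state, candidates[best])]
print("best decision:", candidates[best])
```

In the full algorithm the stub scorers would be trained DNNs, and the meta-RL component would adapt their initial parameters so the model transfers quickly to a new MEC environment.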
