Abstract

Combined with wireless power transfer (WPT) technology, mobile edge computing (MEC) can provide mobile devices with a continuous energy supply and computing resources, improving their battery life and broadening their business application scenarios. This article first designs an MEC model consisting of randomly moving mobile devices and hybrid access points (HAPs) that provide both data transmission and energy transmission. On this basis, the selection of the target server and the amount of data to offload are taken as the learning objectives, and a task offloading strategy based on multi-agent deep reinforcement learning is constructed. The MADDPG and SAC algorithms are then combined to address the instability of the multi-agent environment and the difficulty of convergence. The experimental results show that the improved algorithm based on MADDPG and SAC has good stability and convergence; compared with other algorithms, it achieves good results in energy consumption, delay, and task failure rate.
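
To make the learning setup concrete, the following is a minimal sketch (not the authors' released code) of how a per-device agent could pair a SAC-style stochastic actor, which picks a target server and an offload ratio, with a MADDPG-style centralized critic that sees the joint observations and actions of all agents during training. The dimensions OBS_DIM, N_SERVERS, and GLOBAL_DIM are illustrative assumptions.

import torch
import torch.nn as nn

OBS_DIM = 8      # size of one device's local observation (assumed)
N_SERVERS = 4    # number of candidate edge servers (assumed)
GLOBAL_DIM = 64  # size of concatenated observations/actions of all agents (assumed)

class Actor(nn.Module):
    """SAC-style stochastic actor: chooses a target edge server and an offload ratio."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                                  nn.Linear(128, 128), nn.ReLU())
        self.server_logits = nn.Linear(128, N_SERVERS)  # discrete part: which server
        self.ratio_mu = nn.Linear(128, 1)               # continuous part: how much data
        self.ratio_log_std = nn.Linear(128, 1)

    def forward(self, obs):
        h = self.body(obs)
        server_dist = torch.distributions.Categorical(logits=self.server_logits(h))
        std = self.ratio_log_std(h).clamp(-5, 2).exp()
        ratio_dist = torch.distributions.Normal(self.ratio_mu(h), std)
        server = server_dist.sample()                   # index of the target server
        raw = ratio_dist.rsample()
        ratio = torch.sigmoid(raw)                      # fraction of data offloaded, in [0, 1]
        # log-probability used by SAC's entropy term; the sigmoid
        # change-of-variables correction is omitted for brevity
        log_prob = server_dist.log_prob(server) + ratio_dist.log_prob(raw).sum(-1)
        return server, ratio, log_prob

class CentralizedCritic(nn.Module):
    """MADDPG-style critic: scores the joint observation-action of all agents,
    which counteracts the non-stationarity of the multi-agent environment."""
    def __init__(self):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(GLOBAL_DIM, 256), nn.ReLU(),
                               nn.Linear(256, 1))

    def forward(self, joint_obs_actions):
        return self.q(joint_obs_actions)

At execution time each agent acts from its local observation only; the centralized critic is used during training, which is the usual MADDPG-style remedy for the convergence difficulties mentioned above.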

Highlights

  • With the rapid development and widespread popularity of Internet of Things (IoT) technology, cloud computing can no longer meet the demands of business scenarios where the volume of collected data is very large and immediate, continuous interaction is required, such as online games, real-time streaming media, and augmented reality [1]

  • Mobile edge computing (MEC) builds an open platform for data collection, processing, and analysis at the edge of the network, so that mobile devices can actively offload computing tasks to edge servers, thereby reducing service response time, improving device battery life, and ensuring data security and user privacy [2]

  • In order to satisfy the information download and wireless charging requests of mobile devices, a hybrid access point (HAP) can realize wireless information transmission (WIT) and wireless power transfer (WPT) in the same frequency spectrum, based on the broadcast characteristics of radio frequency (RF) signals and the wireless channel; a simple harvesting model is sketched after this list
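
One simple way to make the WPT side concrete (an assumption here, since the paper's exact harvesting model is not shown on this page) is the standard linear energy-harvesting model: during a charging slot of length $\tau$, a device collects energy

$E = \eta \, P_{\mathrm{HAP}} \, |h|^2 \, \tau$

where $\eta \in (0,1]$ is the RF-to-DC conversion efficiency, $P_{\mathrm{HAP}}$ is the HAP transmit power, and $h$ is the channel gain between the HAP and the device.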

Summary

INTRODUCTION

With the rapid development and widespread popularity of Internet of Things (IoT) technology, cloud computing can no longer meet the demands of business scenarios where the volume of collected data is very large and immediate, continuous interaction is required, such as online games, real-time streaming media, and augmented reality [1]. We propose a strategy that uses multi-agent deep reinforcement learning to decide how much data to offload and where to offload it in the MEC system, by comprehensively considering the computing performance, signal range, and geographic location of each edge server, together with the computing performance, remaining battery capacity, harvested energy, location, and application data amount of each mobile device. The strategy effectively reduces the energy consumption, delay, and task failure rate of mobile devices and edge servers, and improves the service quality of the entire MEC platform.
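
As an illustration only (the field names are assumptions, not taken from the paper), the features listed above could be assembled into a per-agent observation vector along the following lines:

from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class EdgeServer:
    cpu_freq_ghz: float            # computing performance of the edge server
    signal_range_m: float          # coverage radius of its access point
    position: Tuple[float, float]  # geographic location (x, y)

@dataclass
class MobileDevice:
    cpu_freq_ghz: float            # local computing performance
    battery_mah: float             # remaining battery capacity
    harvested_mw: float            # power currently received from the HAP via WPT
    position: Tuple[float, float]  # current location (devices move randomly)
    task_bits: float               # amount of application data to be processed

def build_observation(dev: MobileDevice, servers: List[EdgeServer]) -> np.ndarray:
    """Concatenate device-side and server-side features into one flat observation."""
    feats = [dev.cpu_freq_ghz, dev.battery_mah, dev.harvested_mw,
             *dev.position, dev.task_bits]
    for s in servers:
        feats += [s.cpu_freq_ghz, s.signal_range_m, *s.position]
    return np.asarray(feats, dtype=np.float32)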

RELATED WORK
SYSTEM MODEL
ALGORITHM DESIGN
MDP MODEL
METHODOLOGY
CONCLUSION