Abstract

Mobile-edge computing (MEC) with wireless power transfer has recently emerged as a promising approach for improving the data processing capability of power-limited networks such as wireless sensor networks (WSNs) and the Internet of Things (IoT). In this work, we study a wireless-powered MEC network with a binary offloading policy, in which the computation task of each mobile device (MD) is either executed locally or offloaded entirely to an MEC server. We aim to design an online system that adapts task offloading decisions and resource allocations to time-varying wireless channel conditions in real time. This requires solving hard combinatorial optimization problems within the channel coherence time, which is difficult to achieve with traditional optimization approaches. To address this issue, we propose a parallel computing architecture in which multiple offloading actors, implemented as deep neural networks (DNNs), run in parallel as a scalable way to learn binary offloading decisions from experience. The architecture avoids solving combinatorial optimization problems directly, which significantly reduces computational complexity, especially in large networks. Compared with existing optimization approaches, numerical results demonstrate that the proposed algorithm can achieve optimal performance while significantly reducing computation time. For instance, our algorithm achieves a latency of 0.033 s in a network of 30 MDs.
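
As a rough illustration of the parallel-actor idea described above (not the authors' exact implementation), the sketch below shows several small DNN actors that map the current channel gains of the MDs to relaxed offloading values, which are then quantized to binary decisions and compared via a reward function. The network sizes, the placeholder reward, and all hyperparameters here are hypothetical and only convey the structure of the approach.

```python
# Hypothetical sketch of parallel DNN offloading actors for binary offloading.
# Each actor maps channel gains of N mobile devices to relaxed values in [0, 1],
# which are quantized to a binary decision (0 = compute locally, 1 = offload).
# The candidate decision with the highest placeholder reward is selected.

import numpy as np
import torch
import torch.nn as nn

N_DEVICES = 30   # number of mobile devices (MDs), matching the example in the abstract
N_ACTORS = 4     # number of parallel offloading actors (illustrative choice)

class OffloadingActor(nn.Module):
    """Small fully connected DNN: channel gains -> relaxed offloading probabilities."""
    def __init__(self, n_devices: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_devices, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_devices), nn.Sigmoid(),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h)

def placeholder_reward(h: np.ndarray, x: np.ndarray) -> float:
    """Stand-in for evaluating a fixed binary decision x (e.g., by solving the
    resource-allocation subproblem); the real objective would replace this."""
    return float(np.sum(h * x) + np.sum(1.0 - x))

actors = [OffloadingActor(N_DEVICES) for _ in range(N_ACTORS)]

# One decision round for a newly observed channel realization.
h = np.random.exponential(scale=1.0, size=N_DEVICES).astype(np.float32)
h_t = torch.from_numpy(h).unsqueeze(0)

candidates = []
with torch.no_grad():
    for actor in actors:
        probs = actor(h_t).squeeze(0).numpy()
        x = (probs > 0.5).astype(np.float32)   # quantize to a binary offloading decision
        candidates.append((placeholder_reward(h, x), x))

best_reward, best_x = max(candidates, key=lambda c: c[0])
print("selected binary offloading decision:", best_x, "reward:", best_reward)
```

In an online setting, the selected channel-decision pair would be stored in a replay memory and used to further train the actors, so the offloading policy improves from experience without solving the combinatorial problem explicitly.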
