Abstract

This paper studies the network lifetime optimization problem in a multi-user mobile edge computing (MEC)-enabled Internet of Things (IoT) system comprising an access point (AP), a MEC server, and a set of K mobile devices (MDs) with limited battery capacity. Considering the residual battery energy at the MDs, stochastic task arrivals, and time-varying wireless fading channels, a soft actor-critic (SAC)-based deep reinforcement learning (DRL) lifetime-maximization algorithm, called DeepLM, is proposed to jointly optimize the task splitting ratio, the local CPU-cycle frequencies at the MDs, the bandwidth allocation, and the CPU-cycle frequency allocation at the MEC server, subject to constraints on task queue backlogs, bandwidth, and the maximum CPU-cycle frequencies at the MDs and the MEC server. Our results reveal that DeepLM enjoys a fast convergence rate and a small oscillation amplitude. We also compare the performance of DeepLM with three benchmark offloading schemes, namely, fully edge computing (FEC), fully local computing (FLC), and random computation offloading (RCO). DeepLM increases the network lifetime by 496% and 229% compared to the FLC and RCO schemes, respectively. Notably, it achieves this substantial lifetime improvement while maintaining a non-backlog probability of 0.99, compared with 0.69, 0.53, and 0.25 for FEC, FLC, and RCO, corresponding to performance gains of 30%, 46%, and 74%, respectively.
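To make the joint decision space concrete, the sketch below shows one plausible way an SAC agent's raw action vector could be mapped onto the four decision variables named in the abstract while satisfying the stated bandwidth and maximum-frequency constraints. This is purely illustrative; the function name, vector layout, and projection scheme are assumptions, not taken from the paper.

```python
import numpy as np

def project_action(raw, K, f_md_max, f_mec_max):
    """Illustrative projection of a raw SAC action in [-1, 1]^(4K)
    onto the four decision variables from the abstract (assumed layout):
      - task splitting ratios rho_k in [0, 1]
      - local CPU-cycle frequencies f_k in [0, f_md_max]
      - bandwidth shares b_k >= 0 summing to 1 (total-bandwidth constraint)
      - MEC CPU-cycle allocations c_k >= 0 summing to f_mec_max
    """
    a = np.asarray(raw, dtype=float).reshape(4, K)
    u = (a + 1.0) / 2.0                                # squash to [0, 1]
    rho = u[0]                                         # splitting ratio per MD
    f_local = u[1] * f_md_max                          # local frequency per MD
    bw = u[2] / max(u[2].sum(), 1e-9)                  # normalized bandwidth shares
    f_mec = u[3] / max(u[3].sum(), 1e-9) * f_mec_max   # MEC frequency split
    return rho, f_local, bw, f_mec
```

Under this mapping, any continuous action the policy emits is feasible by construction, which is a common way to handle simplex-type resource constraints in continuous-control DRL.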

