Abstract

With the rapid growth of computation demands from mobile applications, mobile-edge computing (MEC) provides a new way to meet the requirements of high data rates and high computation capability. By offloading latency-critical or computation-intensive tasks to an edge server, mobile devices (MDs) can reduce energy consumption and extend battery life. However, unlike cloud servers, MEC servers have limited resources, which constrains the scalability of offloading. Hence, computation offloading and resource allocation need to be jointly optimized. Toward this end, we consider a multi-access MEC system with multiple servers in which Orthogonal Frequency-Division Multiple Access (OFDMA) is used as the uplink transmission mechanism. To minimize the energy consumption of MDs, we propose a joint optimization of computation offloading, subcarrier allocation, and computing resource allocation, which is a mixed-integer non-linear programming (MINLP) problem. First, we design a bound-improving branch-and-bound (BnB) algorithm to find the global optimal solution. Then, we present a combinational algorithm that obtains a suboptimal solution for practical application. Simulation results reveal that the combinational algorithm performs very close to the BnB algorithm in energy saving while achieving a much lower average running time. Furthermore, both proposed solutions outperform other benchmark schemes.
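To make the energy–delay trade-off behind the offloading decision concrete, the sketch below uses the standard local/remote computation model common in the MEC literature; the coefficient KAPPA, the clock frequencies, the transmit power, and the task parameters are illustrative assumptions, not values taken from the paper.

```python
# A minimal sketch of the local-vs-offload trade-off assumed in this summary.
# All symbols (KAPPA, cycles, data_bits, f_local, f_mec, p_tx, rate) are
# illustrative; the paper's exact model and notation are not reproduced here.

KAPPA = 1e-27  # effective switched-capacitance coefficient (assumed)

def local_cost(cycles, f_local):
    """Energy (J) and delay (s) when the task is computed on the MD."""
    energy = KAPPA * (f_local ** 2) * cycles   # E = kappa * f^2 * C
    delay = cycles / f_local                   # T = C / f
    return energy, delay

def offload_cost(data_bits, cycles, rate, p_tx, f_mec):
    """MD energy (J) and total delay (s) when the task is offloaded."""
    t_up = data_bits / rate                    # uplink transmission time
    energy = p_tx * t_up                       # MD only pays for transmission
    delay = t_up + cycles / f_mec              # transmission + edge execution
    return energy, delay

if __name__ == "__main__":
    # Example task: 1 Mb of input data and 1e9 CPU cycles.
    e_l, t_l = local_cost(cycles=1e9, f_local=1e9)
    e_o, t_o = offload_cost(data_bits=1e6, cycles=1e9,
                            rate=20e6, p_tx=0.2, f_mec=10e9)
    print(f"local:   E={e_l:.3f} J, T={t_l:.3f} s")
    print(f"offload: E={e_o:.3f} J, T={t_o:.3f} s")
```

Under these example numbers, offloading cuts the MD's energy from 1 J to 0.01 J and the delay from 1 s to 0.15 s, which is the kind of saving the joint offloading and resource allocation problem aims to realize across many MDs sharing limited subcarriers and edge CPU cycles.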

Highlights

  • With the popularity of smart mobile devices (MDs) such as smartphones and smart watches/bands, and Internet of Things (IoT) devices such as shared power banks and shared bikes, a large number of new mobile applications have emerged [1], [2]

  • We assume that the central processing units (CPUs) of the mobile-edge computing (MEC) servers are initially idle

  • The simulation results show that the proposed Combinational Joint Computation Offloading and Resource Allocation (CJCORA) algorithm performs very close to the optimal BnB algorithm and significantly outperforms the other benchmark algorithms

Summary

INTRODUCTION

With the popularity of smart mobile devices (MDs) such as smartphones and smart watches/bands, and Internet of Things (IoT) devices such as shared power banks and shared bikes, a large number of new mobile applications have emerged [1], [2]. For ultra-dense networks in future 5G systems, [8] considered a multi-access MEC scenario and proposed a heuristic greedy offloading scheme to solve the computation offloading problem. Both wireless and computing resources affect the performance of an offloading strategy. Paper [21] solved the joint computation offloading and user association problem in a multi-task MEC system, considering the allocation of computation resources and transmission power so as to reduce the overall energy consumption of the system. We consider a MEC system in which multiple MEC servers serve multiple MDs, and propose optimal and suboptimal algorithms for the joint optimization of the offloading decision, wireless resource allocation, and computation resource allocation to minimize the energy consumption of MDs under delay constraints.
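As a rough illustration of how a branch-and-bound search explores the binary offloading decisions, the following toy sketch prunes branches with a simple relaxation-based lower bound. The paper's BnB algorithm additionally branches over subcarrier and computing resource allocation and uses a tighter bound-improving scheme; the per-MD energies and the `max_offload` capacity limit below are hypothetical stand-ins for those coupled resource constraints.

```python
# A toy branch-and-bound over the binary offloading decisions only; the
# coupled subcarrier/CPU allocation of the paper's algorithm is abstracted
# into a single capacity limit on how many MDs may offload.

def branch_and_bound(e_local, e_offload, max_offload):
    """Minimize total MD energy; at most `max_offload` MDs may offload."""
    n = len(e_local)
    best = {"energy": float("inf"), "decision": None}

    def lower_bound(fixed):
        # Relaxation: undecided MDs take their cheaper option while ignoring
        # the shared-resource constraint, giving a valid optimistic bound.
        lb = sum(e_offload[i] if d else e_local[i] for i, d in enumerate(fixed))
        lb += sum(min(e_local[i], e_offload[i]) for i in range(len(fixed), n))
        return lb

    def recurse(fixed):
        if sum(fixed) > max_offload:               # prune infeasible branch
            return
        if lower_bound(fixed) >= best["energy"]:   # prune dominated branch
            return
        if len(fixed) == n:                        # leaf: complete decision
            energy = sum(e_offload[i] if d else e_local[i]
                         for i, d in enumerate(fixed))
            if energy < best["energy"]:
                best.update(energy=energy, decision=tuple(fixed))
            return
        recurse(fixed + [1])                       # branch: next MD offloads
        recurse(fixed + [0])                       # branch: next MD stays local

    recurse([])
    return best

if __name__ == "__main__":
    # Hypothetical per-MD energies (J) for local vs. offloaded execution.
    e_local = [1.0, 0.4, 0.8, 0.6]
    e_offload = [0.1, 0.5, 0.2, 0.3]
    print(branch_and_bound(e_local, e_offload, max_offload=2))
```

The pruning rule is what keeps the search tractable: any partial decision whose optimistic lower bound already exceeds the best feasible solution found so far can be discarded without enumerating its descendants, which is also the principle behind the bound-improving step described in the paper.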

SYSTEM MODEL
MDs LOCAL COMPUTATION
MEC REMOTE COMPUTATION
MDs QoE
JOINT COMPUTATION OFFLOADING AND RESOURCE ALLOCATION
PROBLEM TRANSFORMATION
PROPOSED ALGORITHM
OPTIMAL BRANCH-AND-BOUND ALGORITHM
SUBOPTIMAL INTELLIGENT HEURISTIC ALGORITHM
PERFORMANCE EVALUATIONS
Findings
CONCLUSION
