Abstract

In recent years, multi-access edge computing (MEC) has become a promising technology in 5G networks, as it offloads computational tasks from mobile devices (MDs) to edge servers to overcome MD-specific resource limitations. While computation offloading in 5G networks has been studied extensively, offloading in multi-tier systems with multiple MEC servers remains an open problem. Here, we investigated a two-tier computation-offloading strategy for multi-user, multi-MEC-server heterogeneous networks. For this scenario, we formulated a joint resource-allocation and computation-offloading strategy to minimize the total computing overhead of MDs, comprising completion time and energy consumption. The resulting optimization problem is a mixed-integer nonlinear program (MINLP) and is NP-hard. Given its complexity and the various application constraints, we decomposed the original problem into two subproblems: resource allocation and the computation-offloading decision. We then developed an efficient, low-complexity algorithm based on particle swarm optimization, a meta-heuristic well suited to such challenging optimization problems, that yields high-quality solutions with guaranteed convergence. Simulation results indicated that the proposed algorithm significantly reduced the total computing overhead of MDs relative to several baseline methods while converging to stable solutions.
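The paper's own algorithm is not reproduced here; the sketch below only illustrates how a binary particle swarm optimizer can search per-MD offloading decisions (0 = execute locally, 1 = offload). The `overhead` function is a stand-in for the paper's weighted time-plus-energy objective, and all parameter values (`w`, `c1`, `c2`, swarm size, costs) are illustrative assumptions.

```python
import math
import random

def binary_pso(num_mds, overhead, iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5):
    """Search binary offloading decisions (0 = local, 1 = offload) that
    minimize a given total-overhead function, via binary PSO."""
    pos = [[random.randint(0, 1) for _ in range(num_mds)] for _ in range(swarm)]
    vel = [[0.0] * num_mds for _ in range(swarm)]
    pbest = [p[:] for p in pos]                   # each particle's best decision
    pbest_val = [overhead(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best decision
    for _ in range(iters):
        for i in range(swarm):
            for d in range(num_mds):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # A sigmoid turns the velocity into the probability of "offload".
                pos[i][d] = 1 if random.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            val = overhead(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy overhead: a fixed per-MD cost of running locally vs. at the edge.
random.seed(0)
local_cost = [5.0, 5.0, 1.0, 1.0]
edge_cost = [1.0, 1.0, 5.0, 5.0]
decision, total = binary_pso(
    4, lambda x: sum(l if xi == 0 else e
                     for xi, l, e in zip(x, local_cost, edge_cost)))
```

In the toy instance, the swarm settles on offloading the first two MDs (whose edge cost is lower) and keeping the last two local.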

Highlights

  • With the tremendous growth of the Internet of Things (IoT), massive numbers of mobile devices (MDs), such as smart devices and virtual reality (VR) glasses, connect to communications networks and generate large amounts of data

  • Recent studies have focused on computation offloading in 5G networks [5], with other research focusing on single-tier multi-access edge computing (MEC) servers used in heterogeneous scenarios [9,10,11,12,13,14,15,16]; achieving effective offloading decisions related to computation-completion time and energy consumption in multi-tier MEC-server systems remains an optimization problem that attracts considerable attention [17,18,19,20]

  • We considered multi-access edge computing in 5G heterogeneous networks (HetNets) comprising a set of small cell base stations (SBSs) connected to one macro base station (MBS) (Figure 1)


Summary

Introduction

With the tremendous growth of the Internet of Things (IoT), massive numbers of mobile devices (MDs), such as smart devices and virtual reality (VR) glasses, connect to communications networks and generate large amounts of data. Recent studies have focused on computation offloading in 5G networks [5], with other research focusing on single-tier MEC servers used in heterogeneous scenarios [9,10,11,12,13,14,15,16]; achieving effective offloading decisions related to computation-completion time and energy consumption in multi-tier MEC-server systems remains an optimization problem that attracts considerable attention [17,18,19,20]. We addressed joint resource allocation and computation-offloading decisions to minimize the total computing overhead of MDs, including completion time and energy consumption.
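The objective above, a weighted combination of completion time and energy, is commonly modeled as follows for local versus offloaded execution. This is a generic sketch, not the paper's exact model: the parameter names and values (`kappa`, the weights `beta_t`/`beta_e`, rates, and powers) are hypothetical illustrations.

```python
def local_overhead(cycles, f_local, kappa, beta_t=0.5, beta_e=0.5):
    """Weighted overhead of executing a task on the mobile device itself.
    cycles: CPU cycles the task requires; f_local: MD CPU frequency (cycles/s);
    kappa: effective switched-capacitance coefficient of the MD's chip."""
    t = cycles / f_local                 # local execution time (s)
    e = kappa * f_local ** 2 * cycles    # dynamic CPU energy (J)
    return beta_t * t + beta_e * e

def offload_overhead(bits, cycles, rate, p_tx, f_edge, beta_t=0.5, beta_e=0.5):
    """Weighted overhead of offloading: uplink transmission plus edge execution.
    bits: task input size; rate: uplink rate (bit/s); p_tx: MD transmit power (W);
    f_edge: CPU frequency the MEC server allocates to the task (cycles/s)."""
    t_tx = bits / rate                   # uplink transmission time (s)
    t_exec = cycles / f_edge             # edge execution time (s)
    e_tx = p_tx * t_tx                   # MD energy spent transmitting (J)
    return beta_t * (t_tx + t_exec) + beta_e * e_tx

# Illustrative numbers: a 1-Gcycle task with a 1-Mbit input.
ol = local_overhead(1e9, 1e9, 1e-28)
oe = offload_overhead(1e6, 1e9, 1e7, 0.5, 1e10)
```

Under such a model, an MD benefits from offloading whenever `offload_overhead` falls below `local_overhead`; the offloading decision and the allocated edge frequency `f_edge` are exactly the variables the two subproblems optimize.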

Related Work
Scenario Description
Communication Model
Local Execution Model
Computation-Offloading Model
Problem Formulation and Analysis
Computational Resource Allocation
Computation-Offloading Decision
Simulation Settings
Simulation Results
Conclusions

