Abstract

Computation caching is a novel strategy for improving the performance of computation offloading in wireless networks endowed with edge-cloud or fog-computing capabilities. It consists of preemptively storing, in caches located at the edge of the network, the results of computations that users offload to the edge cloud. The goal is to avoid redundant and repetitive processing of the same tasks, thus streamlining the offloading process and improving the exploitation of both the users’ and the network’s resources. In this paper, a novel computation caching policy is defined, investigated, and benchmarked against state-of-the-art solutions. The proposed policy is built on three characterizing parameters of offloadable computational tasks: popularity, input size, and output size. This work proves the crucial importance of jointly including all three parameters in the design of efficient policies. The proposed policy has low computational complexity and is numerically shown to achieve optimality for several performance indicators and to yield significantly better results than the other analyzed policies. This holds in both a single- and a multi-cell scenario, where a serving small cell has access to its neighboring cells’ caches via backhaul. The benefits of computation caching are highlighted and quantified through extensive numerical simulations in terms of reduction of uplink traffic, communication and computation costs, offloading delay, and computational resource outage.
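To give a concrete flavor of a policy driven by the three parameters named above, the following is a minimal sketch of a *hypothetical* greedy caching heuristic: each task's result is scored by the expected uplink traffic it saves per bit of cache it occupies (popularity × input size / output size), and the cache is filled in decreasing score order. This is an illustrative assumption, not the paper's actual policy; the task fields and the scoring rule are inventions for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    popularity: float   # probability that a user requests this task
    input_size: float   # bits the user must upload if the task is offloaded
    output_size: float  # bits the cached result occupies at the edge

def greedy_cache(tasks, capacity):
    """Fill a cache of `capacity` bits with task results, greedily
    ranked by expected uplink bits saved per bit of cache used.
    (Illustrative heuristic only, not the policy from the paper.)"""
    ranked = sorted(tasks,
                    key=lambda t: t.popularity * t.input_size / t.output_size,
                    reverse=True)
    cached, used = [], 0.0
    for t in ranked:
        if used + t.output_size <= capacity:
            cached.append(t.name)
            used += t.output_size
    return cached
```

For example, with tasks A (popularity 0.5, input 100, output 10), B (0.3, 50, 50), and C (0.2, 200, 30) and a 40-bit cache, the heuristic caches A and C: B's low savings-per-bit score leaves it out even though it is individually popular. Note how a policy using popularity alone would rank B above C, illustrating why input and output sizes matter jointly.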

Highlights

  • The future of mobile communications will be characterized by ubiquitous connection availability, very dense networks, ultra-low latency, energy efficiency, and an extremely fast and copious exchange of data and information

  • There is no privileged order of magnitude between the values taken by a task’s input size and output size, even when the maximum possible value is much larger than the minimum

  • In this paper, we studied computation caching, which combines caching with the problem of dealing with the costs of producing the data to cache

Introduction

The future of mobile communications will be characterized by ubiquitous connection availability, very dense networks, ultra-low latency, energy efficiency, and an extremely fast and copious exchange of data and information. A game-changing idea for the 5G revolution consists in empowering mobile network terminals with data elaboration and storage capabilities, bringing cloud support as close as possible to users. This paradigm is called Multi-access Edge Computing (MEC) [22], [23], [39], also known as mobile edge cloud or mobile edge computing. Serving Small Cells (SSCs) can be entrusted by a User Equipment (UE) with computational assignments to run on its behalf, through a procedure called task or computation offloading [2], [3], [35], [38]. This revolutionizes the classical interaction between UEs and network access points, allowing UEs to both save energy and meet the tight latency constraints that will characterize many 5G services and use cases.
