Abstract

Mobile edge computing has become a promising technology for mitigating the latency of various cloud services. In addition, network function virtualization (NFV) has shown great potential in reducing the operational cost of cloud services while enhancing the flexibility of virtual network function deployments, by implementing dedicated hardware network functions as software running on generic servers. Recently, GPU acceleration has been investigated to speed up flow processing in virtual network functions (VNFs) by leveraging the parallelism of GPUs. VNFs that need acceleration prefer to be placed in cloudlets (locations) equipped with GPUs. However, little attention has been paid to VNF placement that takes GPU affinity in the cloudlets of mobile edge clouds into account. In this paper, we consider the affinity-aware throughput maximization problem in a mobile edge cloud, leveraging the parallelism of GPUs for user requests with VNF requirements. We consider two types of affinity in VNF placement: soft affinity, which allows VNFs to be executed by either CPUs or GPUs in cloudlets, and hard affinity, which only allows VNFs to be placed on the GPUs of a specified set of cloudlets. We formulate the two corresponding VNF placement problems in a mobile edge cloud. Specifically, we first propose an exact solution to the soft-affinity throughput maximization problem by formulating it as an Integer Linear Program (ILP). We then propose an efficient algorithm for the problem: a randomized algorithm with a provable approximation ratio for the hard-affinity-aware throughput maximization problem, which we extend to the soft-affinity throughput maximization problem. Furthermore, assuming that user requests arrive at the mobile edge cloud one by one without knowledge of future arrivals, we devise an online algorithm with a good competitive ratio for the dynamic hard-affinity-aware throughput maximization problem. Finally, we evaluate the performance of the proposed algorithms through simulations and implementations in a real testbed. Experimental results show that the proposed algorithms outperform their existing counterparts, achieving higher throughput.
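The paper's ILP is not reproduced in the abstract; the following is only an illustrative sketch of what an affinity-aware throughput-maximization ILP could look like, under assumed notation that is not taken from the paper: $x_r$ indicates whether request $r$ is admitted, $y_{r,v,c}$ indicates whether VNF $v$ of request $r$ is placed in cloudlet $c$, $\rho_r$ is the packet rate (throughput) of request $r$, $g_{v,c}$ is the processing-resource demand of VNF $v$ in cloudlet $c$, $G_c$ is the capacity of cloudlet $c$, and $A_v$ is the set of cloudlets to which VNF $v$ has hard affinity.

% Illustrative sketch only (requires amsmath); notation is assumed, not the paper's.
\begin{align}
  \max \quad & \sum_{r \in R} \rho_r \, x_r  && \text{(total admitted throughput)} \notag\\
  \text{s.t.} \quad
  & \sum_{c \in C} y_{r,v,c} = x_r, && \forall r \in R,\ \forall v \in V_r, \notag\\
  & \sum_{r \in R} \sum_{v \in V_r} g_{v,c}\, \rho_r\, y_{r,v,c} \le G_c, && \forall c \in C, \notag\\
  & y_{r,v,c} = 0, && \forall v,\ \forall c \notin A_v \ \text{(hard affinity only)}, \notag\\
  & x_r \in \{0,1\},\quad y_{r,v,c} \in \{0,1\}, && \forall r \in R,\ \forall v \in V_r,\ \forall c \in C. \notag
\end{align}

In this sketch, the hard-affinity case restricts each accelerated VNF to its designated GPU-equipped cloudlets, whereas a soft-affinity variant would instead allow placement on either CPU or GPU resources, with separate capacity constraints for each resource type.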
