Abstract

With the advent of Network Function Virtualization (NFV) and Mobile Edge Computing (MEC), outsourcing network functions (NFs) to the MEC has become popular among network service providers (NSPs), since it brings scalability and flexibility to NF deployment and maintenance. Each user request traverses a service function chain (SFC), which consists of several virtual network functions (VNFs, software substitutes for traditional hardware-based middleboxes) in a specific order, before a response is returned. Unlike conventional hardware-based middleboxes, VNFs are not highly reliable, owing to potential software faults and host malfunctions. A sensible remedy is therefore to add redundancy to the primary VNFs of an SFC to enhance its availability. Nevertheless, on which MEC node should each VNF be placed, and how many backup instances suffice to meet the availability requirement of each SFC? These issues remain unresolved. In this article, we present the availability-aware provision of SFC (APoS) in the MEC environment, with the primary goal of maximizing the number of served requests while meeting the requirements and reliability expectations of SFCs. For APoS, we address the following two fundamental challenges. (i) First, how can the primary and backup VNFs be mapped efficiently so that the availability requirements of SFCs are met? We formulate this as an integer nonlinear program (INLP) under the resource limitation of each MEC node. The problem is NP-hard, and a novel binary N-back search method is proposed to derive the optimal solution for mapping the primary and backup VNFs. (ii) Second, how can we reduce the latency for users to access their desired SFCs? We investigate how to minimize the average delay of all requests in each time slot. To solve this problem, we design an online service switching (OSS) method, which jointly considers the queuing delay, communication delay, and switching delay, and achieves the optimal solution with a theoretical guarantee. Finally, we evaluate the proposed methods on real-world datasets. The results demonstrate that, compared with the benchmarks, our methods achieve approximately 20% higher request acceptance and up to 30% delay reduction on average.
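
For intuition, the following is a minimal sketch of how backup redundancy is commonly modeled, assuming independent instance failures; the notation (a_f, b_f, A_s, A_s^req) is illustrative and not necessarily that of the paper's INLP formulation.

% Illustrative availability constraint (assumption: VNF instances fail independently).
% a_f       : availability of one instance of VNF f
% b_f       : number of backup instances provisioned for VNF f
% A_s       : availability of SFC s, which chains the VNFs in the set F_s
% A_s^{req} : availability requirement of SFC s
\[
  A_s \;=\; \prod_{f \in F_s} \Bigl( 1 - (1 - a_f)^{\,b_f + 1} \Bigr),
  \qquad
  A_s \;\ge\; A_s^{\mathrm{req}} .
\]

Under the same caveat, the per-slot objective targeted by the OSS method can be read as minimizing the average of the three delay components named in the abstract (R_t denotes the requests in time slot t; assumed notation):

% Illustrative per-slot delay objective (queuing, communication, and switching delay of request r).
\[
  \min \;\; \frac{1}{|R_t|} \sum_{r \in R_t} \Bigl( d_r^{\mathrm{que}} + d_r^{\mathrm{com}} + d_r^{\mathrm{sw}} \Bigr) .
\]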
