Abstract
Network Function Virtualisation (NFV) enables rapid, on-demand deployment of new network services using general-purpose servers. Multiple virtual network functions (VNFs) can be dynamically chained in an ordered sequence to deliver end-to-end services. Nevertheless, the latency introduced by processing packets sequentially at every VNF in the chain can hurt the performance of latency-sensitive applications. To reduce this latency, existing solutions consider only the maximum capacity of individual VNFs and do not take into account the fact that the performance of a VNF, as with any software application, is bottlenecked by either the CPU or the I/O peripheral capacity of the server it runs on, as well as by its underlying implementation (e.g., single- or multi-threaded). By exploiting this knowledge, we can better determine the number of required VNF instances and distribute the network traffic among them for any given VNF chain. In this paper, we formulate the VNF Scaling and Traffic Distribution problem and prove that it is NP-hard. We then present the design and implementation of Natif, an efficient vNf-Aware VNF insTantIation and traFfic distribution scheme. Through our OpenStack-based testbed evaluations, we demonstrate that Natif can significantly improve network latency by 188% on average compared to other approaches. As a chain composition scheme, Natif can work effectively with any VNF chaining algorithm.
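To make the core idea concrete, the following is a minimal, illustrative Python sketch (not Natif's actual algorithm, and all names, capacities, and the even traffic split are assumptions): the effective capacity of one VNF instance is taken as the tighter of its CPU and I/O bottlenecks, adjusted for whether the implementation can exploit multiple cores, and the instance count and per-instance traffic share follow from the offered load.

```python
# Toy illustration only (hypothetical profiles, not the paper's formulation):
# size a VNF's instance count by its binding bottleneck (CPU vs. I/O) and
# split the offered traffic evenly across the resulting instances.
from dataclasses import dataclass
from math import ceil

@dataclass
class VnfProfile:
    name: str
    cpu_capacity_mbps: float   # assumed per-core throughput limit imposed by the CPU
    io_capacity_mbps: float    # assumed throughput limit imposed by the I/O path (e.g., NIC)
    multi_threaded: bool       # single-threaded VNFs cannot use extra cores in one instance

def per_instance_capacity(vnf: VnfProfile, cores_per_instance: int) -> float:
    """Effective capacity of one instance: the tighter of the CPU and I/O bottlenecks."""
    cpu_cap = vnf.cpu_capacity_mbps * (cores_per_instance if vnf.multi_threaded else 1)
    return min(cpu_cap, vnf.io_capacity_mbps)

def scale_and_split(vnf: VnfProfile, offered_load_mbps: float, cores_per_instance: int = 2):
    """Return (instance count, per-instance traffic share in Mbps) for one VNF in a chain."""
    cap = per_instance_capacity(vnf, cores_per_instance)
    instances = max(1, ceil(offered_load_mbps / cap))
    return instances, offered_load_mbps / instances

if __name__ == "__main__":
    # Hypothetical single-threaded firewall VNF: CPU-bound well before the I/O limit.
    firewall = VnfProfile("firewall", cpu_capacity_mbps=400,
                          io_capacity_mbps=1000, multi_threaded=False)
    n, share = scale_and_split(firewall, offered_load_mbps=1500)
    print(f"{firewall.name}: {n} instances, ~{share:.0f} Mbps each")
```

This sketch only captures the intuition that ignoring the CPU/I/O bottleneck and threading model leads to mis-sized instance counts; the paper's actual scheme jointly optimises scaling and traffic distribution across the whole chain.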