Abstract

Network Function Virtualization (NFV) is a system that provides application services by connecting virtual network functions (VNFs), and it is expected to accommodate new service requests by developing new VNFs and connecting them with existing ones. Because VNFs are implemented in software, their design and placement are important problems for the NFV system, as they determine both current and future system costs. In this article, we investigate design principles and placement policies that reduce the cost of designing and developing VNFs to accommodate new service requests. For the design policy, we introduce a Core/Periphery-Based Design (CPBD) that applies the core/periphery concept to VNF development. In CPBD, “core” VNFs are developed in advance and repeatedly used to accommodate future service requests. While core VNFs are common to current and future service requests, “periphery” VNFs are developed and customized for each service request. Next, we investigate VNF placement policies for CPBD that fully exploit its core/periphery structure. Specifically, we examine the Center-Located Core/Periphery placement (CLCP) policy and the Geographically-Distributed Core/Periphery placement (GDCP) policy, and evaluate the long-term cost of the NFV system under resource restrictions on running VNFs. Our results show that CPBD reduces the long-term cost of designing and developing VNFs by 23% compared to a design with no core VNFs. Moreover, when there are no resource restrictions, both CLCP and GDCP reduce the long-term costs of placing and connecting VNFs by 15% compared to an existing VNF placement algorithm. With resource constraints, GDCP reduces the long-term costs by 11% compared to CLCP.
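To make the CPBD idea concrete, the following sketch illustrates how core VNFs, once developed, are reused across service requests, while periphery VNFs incur development and customization cost on every request. The VNF names, cost values, and helper function are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch of the core/periphery idea behind CPBD (illustrative only;
# the VNF names and cost constants below are hypothetical assumptions).

CORE_VNFS = {"firewall", "nat", "load_balancer"}  # developed once, shared across requests
developed = set()                                  # core VNFs that already exist in the system

def development_cost(requested_vnfs, core_cost=10, periphery_cost=3):
    """Return the cost of accommodating one service request under CPBD.

    Core VNFs are developed at most once and then reused; periphery VNFs
    are developed and customized for every request.
    """
    cost = 0
    for vnf in requested_vnfs:
        if vnf in CORE_VNFS:
            if vnf not in developed:   # core VNF: pay its cost only the first time
                developed.add(vnf)
                cost += core_cost
        else:                          # periphery VNF: customized per request
            cost += periphery_cost
    return cost

# Example: the second request reuses the already-developed core VNF "firewall".
print(development_cost({"firewall", "nat", "video_cache"}))  # two core VNFs + one periphery VNF
print(development_cost({"firewall", "transcoder"}))          # only the periphery VNF adds cost
```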

Highlights

  • As service demands become increasingly diverse, Network Function Virtualization (NFV) is gaining attention

  • We investigate how to place virtual network functions (VNFs) designed based on a core/periphery structure; the deployment cost of VNFs can be reduced by appropriately placing the core VNFs in advance so that they can be shared to accommodate future service requests

  • The upper bound U_{m,v}(0), i.e., the maximum number of service-chain requests that can use a VNF m placed at node v without processing overhead, is drawn uniformly at random from the range [4, 40] (a sketch of this setup follows the list)
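As a concrete illustration of this setting, the sketch below draws U_{m,v}(0) uniformly at random from [4, 40] for each (VNF, node) pair. The VNF and node identifiers are made up, and treating the bound as an integer request count is our assumption.

```python
import random

# Hypothetical setup sketch: draw the per-(VNF, node) upper bound U_{m,v}(0)
# uniformly at random from [4, 40]. The VNF/node sets are made up, and the
# bound is assumed to be an integer number of service-chain requests.
vnfs = ["m1", "m2", "m3"]
nodes = ["v1", "v2", "v3", "v4"]

U0 = {(m, v): random.randint(4, 40) for m in vnfs for v in nodes}

# U0[(m, v)] is the maximum number of service-chain requests that can use
# VNF m placed at node v without incurring processing overhead.
print(U0[("m1", "v1")])
```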


Summary

Introduction

As service demands become increasingly diverse, Network Function Virtualization (NFV) is gaining attention. VNFs can run on general-purpose hardware shared with other VNFs, and NFV flexibly accommodates various service requests by connecting VNFs over networks. Many previous studies on NFV have discussed placement algorithms that minimize the costs of placing VNFs [1], [2]. Nam et al. [2] minimized the end-to-end service time by placing VNFs based on Zipf’s law, which models the frequency of VNF use. Although these studies used different algorithms or approaches, they implicitly assumed that VNFs are developed in advance. In reality, service requests may change drastically and require VNFs that have not yet been developed.
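As a rough illustration of how Zipf’s law can model VNF use frequency in such placement approaches, the sketch below computes the expected use frequency of each VNF by popularity rank; the exponent and the number of VNFs are assumptions, not values from [2].

```python
# Illustrative sketch of Zipf's law applied to VNF use frequency; the exponent s
# and the VNF count are assumed values for demonstration only.
def zipf_frequencies(num_vnfs, s=1.0):
    """Probability that the k-th most popular VNF is used, for k = 1..num_vnfs."""
    weights = [1.0 / (k ** s) for k in range(1, num_vnfs + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# The most popular VNFs dominate usage, so placing them first pays off.
for rank, p in enumerate(zipf_frequencies(5), start=1):
    print(f"VNF rank {rank}: expected use frequency {p:.2f}")
```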

