Abstract

Traditional networks are usually equipped with many dedicated middleboxes to provide various network services. Though these hardware-based devices certainly improve network performance, they are usually expensive and difficult to upgrade. To overcome this shortcoming, network function virtualization (NFV), which realizes network services in the form of virtual network functions (VNFs), has been proposed. Compared to middleboxes, VNFs are easy to deploy and migrate. Usually, multiple VNFs are chained in a specified order as a service function chain (SFC) to serve a given flow. Many existing works schedule SFCs to minimize the average flow completion time, but they consider only a single resource constraint. In this paper, we address the problem of multi-resource SFC scheduling (MR-SFCS) with the goal of minimizing the average flow completion time. We formulate this problem as an Integer Linear Programming (ILP) model and prove its NP-hardness. To tackle this problem effectively, we propose an approach based on deep reinforcement learning (DRL) with a tailored reward design and state representation. In addition, we extend the offline approach to online SFC scheduling. The experimental results demonstrate that our DRL method significantly reduces the average flow completion time and achieves a cost saving of 69.07% against the benchmark method.
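To make the multi-resource objective concrete, the following is a minimal illustrative sketch (not the paper's ILP or DRL algorithm): a toy greedy scheduler that runs each flow's SFC of VNFs on a single node, starting a VNF only when both CPU and memory capacity are available, and then reports each flow's completion time. The `VNF` class, resource model, and greedy policy are all assumptions introduced for illustration; the sketch only shows how contention on multiple resources delays flows and thus raises the average flow completion time.

```python
from dataclasses import dataclass

@dataclass
class VNF:
    duration: int  # time units the VNF runs for
    cpu: int       # CPU units held while running
    mem: int       # memory units held while running

def schedule_sfcs(sfcs, cpu_cap, mem_cap):
    """Greedy toy scheduler: at each time step, start a flow's next VNF
    if its predecessor has finished and both resources fit.
    Returns the flow completion time (FCT) of every flow."""
    n = len(sfcs)
    next_vnf = [0] * n   # index of the next VNF to run, per flow
    ready_at = [0] * n   # earliest time a flow's next VNF may start
    fct = [0] * n
    running = []         # (end_time, cpu, mem) of active VNFs
    t = 0
    while any(next_vnf[i] < len(sfcs[i]) for i in range(n)):
        # release resources held by VNFs that have finished by time t
        running = [r for r in running if r[0] > t]
        cpu_used = sum(r[1] for r in running)
        mem_used = sum(r[2] for r in running)
        for i in range(n):
            if next_vnf[i] < len(sfcs[i]) and ready_at[i] <= t:
                v = sfcs[i][next_vnf[i]]
                # a VNF starts only if BOTH resources fit simultaneously
                if cpu_used + v.cpu <= cpu_cap and mem_used + v.mem <= mem_cap:
                    running.append((t + v.duration, v.cpu, v.mem))
                    cpu_used += v.cpu
                    mem_used += v.mem
                    next_vnf[i] += 1
                    ready_at[i] = t + v.duration
                    if next_vnf[i] == len(sfcs[i]):
                        fct[i] = t + v.duration
        t += 1
    return fct

# Two identical single-VNF flows: with cpu_cap=2 they must run back to back
# (FCTs 2 and 4, average 3); with cpu_cap=4 they run in parallel (average 2).
flows = [[VNF(duration=2, cpu=2, mem=1)], [VNF(duration=2, cpu=2, mem=1)]]
print(schedule_sfcs(flows, cpu_cap=2, mem_cap=2))  # [2, 4]
print(schedule_sfcs(flows, cpu_cap=4, mem_cap=2))  # [2, 2]
```

The example shows why a single-resource model is insufficient: a VNF can be blocked by whichever resource is exhausted first, so a scheduler that tracks only CPU (or only bandwidth) may start VNFs that cannot actually fit.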
