This article studies the problem of virtual network topology (VNT) slicing in datacenter interconnections (DCIs) and proposes a novel service framework to better balance the tradeoff between cost-effectiveness and time-efficiency. Our idea is to partition a DCI into non-overlapping subgraphs, divide the VNT slicing in each subgraph into four collaborative steps, and involve tenants in the calculation of virtual network embedding (VNE) schemes. With our proposal, an agent of the infrastructure provider (InP) leverages deep reinforcement learning (DRL) to price and advertise the substrate resources in each subgraph, motivates tenants to request resources in a load-balanced manner, and accepts VNE schemes from the tenants to avoid resource conflicts. Meanwhile, the tenants compute their own VNE schemes independently and in a distributed manner, according to the resource information (i.e., the available resources and their prices) advertised by the agent. We first design the DRL model based on the deep deterministic policy gradient (DDPG) and develop a VNT compression method based on an auto-encoder (AE) to generalize the DRL's operation. Then, we study how to resolve resource conflicts among the distributedly calculated VNE schemes, build a conflict graph (CG) to transform the VNE selection into finding the maximum weighted independent set (MWIS) in the CG, and design a polynomial-time approximation algorithm to solve this problem. Extensive simulations confirm that, compared with a centralized service framework relying solely on the InP for VNE calculation, our proposed DRL-assisted distributed framework provisions VNT requests with significantly shorter computation times and comparable blocking performance.
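The conflict-resolution step described above can be illustrated with a standard greedy heuristic for weighted independent sets; this is a minimal sketch under assumed data structures (nodes are candidate VNE schemes, edges mark pairs that contend for the same substrate resources, weights rank scheme value), not the paper's actual algorithm:

```python
def greedy_mwis(weights, edges):
    """Greedy MWIS approximation on a conflict graph: repeatedly pick the
    remaining node with the highest weight-to-degree ratio, then discard
    it and its conflicting neighbors (the classical GWMIN rule)."""
    # Build an adjacency map from the conflict-edge list.
    adj = {v: set() for v in weights}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    remaining = set(weights)
    chosen = []
    while remaining:
        # Favor high-value schemes that conflict with few others.
        best = max(
            remaining,
            key=lambda v: weights[v] / (len(adj[v] & remaining) + 1),
        )
        chosen.append(best)
        # Accepting `best` rules out every scheme that conflicts with it.
        remaining -= adj[best] | {best}
    return chosen
```

Each iteration inspects every remaining node and its neighborhood, so the whole loop runs in polynomial time, matching the complexity class the abstract claims for its (different, unspecified) approximation algorithm.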