Abstract

Network Virtualization (NV) techniques allow multiple virtual network requests to beneficially share resources on the same substrate network, such as node computational resources and link bandwidth. As the best-known member of the NV family, virtual network embedding efficiently allocates the limited resources of a shared substrate network among users. However, traditional heuristic virtual network embedding algorithms generally follow a static operating mechanism that cannot adapt well to dynamic network structures and environments, resulting in inferior node ranking and embedding strategies. Several reinforcement learning aided embedding algorithms have been conceived to update the decision-making strategies dynamically, but they treat the node embeddings of the same request as discrete, independent decisions and ignore their continuity. To address this problem, a Continuous-Decision virtual network embedding scheme relying on Reinforcement Learning (CDRL) is proposed in our paper, which regards the node embedding of the same request as a time-series problem formulated by the classic seq2seq model. Moreover, two traditional heuristic embedding algorithms as well as the classic reinforcement learning aided embedding algorithm are used for benchmarking our proposed CDRL algorithm. Finally, simulation results show that our proposed algorithm is superior to the other three algorithms in terms of long-term average revenue, revenue-to-cost ratio and acceptance ratio.
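The sequential, continuity-aware node embedding described above can be illustrated with a minimal sketch. This is not the paper's actual model: CDRL learns its scoring policy with a seq2seq network trained by reinforcement learning, whereas here a hand-crafted residual-capacity score stands in for the learned scores. The sketch only shows the decoding structure, where each virtual node is mapped in turn and already-used substrate nodes are masked, so the decision at step t depends on the decisions at steps 1..t-1.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over node scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def sequential_node_embedding(substrate_cpu, virtual_cpu_demands):
    """Greedy sequential (seq2seq-style) node embedding sketch.

    Each virtual node is mapped to one substrate node per decoding
    step; used substrate nodes are masked out, so later choices are
    conditioned on earlier ones ('continuous decision'). The score
    here is simply the residual CPU capacity, standing in for the
    learned policy scores of an actual seq2seq model (assumption).
    """
    remaining = substrate_cpu.astype(float).copy()
    used = np.zeros(len(substrate_cpu), dtype=bool)
    mapping = []
    for demand in virtual_cpu_demands:
        # Mask substrate nodes that are already used or lack capacity.
        scores = np.where(~used & (remaining >= demand), remaining, -np.inf)
        if np.isinf(scores).all():
            return None  # no feasible node: the request is rejected
        probs = softmax(scores)       # a learned model would sample here
        choice = int(np.argmax(probs))  # greedy decode for illustration
        mapping.append(choice)
        used[choice] = True
        remaining[choice] -= demand
    return mapping
```

For example, with substrate CPU capacities [50, 100, 30] and a request demanding [40, 40], the first virtual node is placed on the highest-capacity node and the second on the best remaining feasible node, yielding the mapping [1, 0].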
