Abstract

Virtual network embedding (VNE) is a key challenge in network resource management: embedding decisions must be made online while the operator pursues long-term average revenue. Most previous work either ignores the dynamics of Virtual Network (VN) modeling or cannot automatically perceive the complex, time-varying network state in order to produce a reasonable embedding scheme. In view of this, we model a network embedding framework in which topology and resource allocation change dynamically with the number of network users and the workload, and we introduce a deep reinforcement learning method to solve the VNE problem. Specifically, we propose a dynamic virtual network embedding algorithm based on Deep Reinforcement Learning (DRL), named DVNE-DRL. In DVNE-DRL, VNE is modeled as a Markov Decision Process (MDP); deep learning is used to perceive the current network state from historical data and embedding knowledge, while reinforcement learning drives the embedding decisions. In addition, we improve feature extraction and matrix optimization, considering the characteristics of the virtual and physical networks jointly to alleviate redundancy and slow convergence. Simulation results show that, compared with existing state-of-the-art algorithms, DVNE-DRL improves the acceptance rate and average revenue by about 25% and 35%, respectively.
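
To make the MDP framing concrete, the following is a minimal, illustrative sketch (not the paper's implementation): the state combines remaining substrate resources with the demand of the virtual node being placed, the action selects a substrate node, and the reward is the revenue of a feasible placement. A linear Q-function trained by one-step Q-learning stands in for the deep network described above; all names (VNEEnvironment, greedy_action, capacities, learning rates) are hypothetical.

```python
# Illustrative sketch of VNE cast as an MDP with a simple RL agent.
# This is NOT the DVNE-DRL implementation; it only shows the state/action/reward framing.
import numpy as np

rng = np.random.default_rng(0)

class VNEEnvironment:
    """Toy substrate network: each node has a CPU capacity; a virtual network
    request is a sequence of virtual nodes, each with a CPU demand."""
    def __init__(self, num_substrate_nodes=8, capacity=10.0):
        self.capacity = np.full(num_substrate_nodes, capacity)
        self.num_nodes = num_substrate_nodes

    def reset(self, request_size=3):
        self.remaining = self.capacity.copy()
        self.demands = rng.uniform(1.0, 4.0, size=request_size)
        self.step_idx = 0
        return self._state()

    def _state(self):
        # State = normalized remaining substrate CPU plus current virtual node demand.
        demand = self.demands[self.step_idx] if self.step_idx < len(self.demands) else 0.0
        return np.concatenate([self.remaining / self.capacity, [demand / self.capacity[0]]])

    def step(self, action):
        demand = self.demands[self.step_idx]
        if self.remaining[action] >= demand:   # feasible placement
            self.remaining[action] -= demand
            reward = demand                    # revenue proportional to embedded demand
        else:
            reward = -1.0                      # infeasible placement: penalize and reject
        self.step_idx += 1
        done = reward < 0 or self.step_idx == len(self.demands)
        return self._state(), reward, done

def greedy_action(weights, state, num_actions, epsilon=0.1):
    """Epsilon-greedy action over a linear Q approximation (a stand-in for the
    deep policy/value network described in the abstract)."""
    if rng.random() < epsilon:
        return int(rng.integers(num_actions))
    return int(np.argmax(weights @ state))

env = VNEEnvironment()
state_dim = env.num_nodes + 1
weights = np.zeros((env.num_nodes, state_dim))
alpha, gamma = 0.05, 0.95

for episode in range(500):                     # online arrival of VN requests
    state = env.reset()
    done = False
    while not done:
        action = greedy_action(weights, state, env.num_nodes)
        next_state, reward, done = env.step(action)
        # One-step Q-learning update on the linear approximation.
        target = reward + (0.0 if done else gamma * np.max(weights @ next_state))
        td_error = target - weights[action] @ state
        weights[action] += alpha * td_error * state
        state = next_state
```

In the full algorithm, the linear Q-function above would be replaced by a deep network that also ingests historical data and topology features, and the reward would reflect long-term average revenue rather than per-placement revenue alone.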
