Abstract

To achieve the service-oriented features of 5G, network slicing creates logical virtual networks in which multiple services are provided over a common physical infrastructure. The performance of network slicing depends on the intelligent management of multi-dimensional resources, which is exactly what multi-access edge computing (MEC) provides. This paper proposes a joint optimization of communication, computing and caching (3C) resources in multi-access edge network slicing. The objective of the two-level resource allocation problem is to maximize the utility obtained by mobile virtual network operators while guaranteeing quality of service (QoS). A deep reinforcement learning (DRL) approach is employed, which enables the resource allocation scheme to adapt intelligently to the dynamic environment. Specifically, we propose a novel DRL approach named twin-actor deep deterministic policy gradient (twin-actor DDPG). Since the action space is continuous, DDPG is adopted: the actor generates a deterministic policy, while the critic evaluates that policy and guides the actor toward the optimal one. A novel twin-actor structure replaces the single actor of DDPG, so that the slice-level action and the user-level action can be generated separately. The convergence and effectiveness of the proposed DRL-based algorithm are verified by numerical simulation.
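As a rough illustration of the twin-actor structure described above, the following PyTorch sketch shows one plausible arrangement: two actor heads share the state, with the user-level actor conditioned on the slice-level action, and a single critic scores the joint action. This is not the authors' implementation; the class names, network widths, the Sigmoid action bounds, and the decision to feed the slice-level action into the user-level actor are all illustrative assumptions.

```python
# Hypothetical sketch of a twin-actor DDPG network pair (not the paper's code).
import torch
import torch.nn as nn

class TwinActor(nn.Module):
    def __init__(self, state_dim, slice_action_dim, user_action_dim, hidden=128):
        super().__init__()
        # Slice-level actor: maps the state to slice-level 3C allocations.
        self.slice_actor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, slice_action_dim), nn.Sigmoid(),  # bounded continuous actions
        )
        # User-level actor: conditioned on the state and the slice-level action
        # (an assumption about how the two levels are coupled).
        self.user_actor = nn.Sequential(
            nn.Linear(state_dim + slice_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, user_action_dim), nn.Sigmoid(),
        )

    def forward(self, state):
        a_slice = self.slice_actor(state)
        a_user = self.user_actor(torch.cat([state, a_slice], dim=-1))
        return a_slice, a_user

class Critic(nn.Module):
    def __init__(self, state_dim, slice_action_dim, user_action_dim, hidden=128):
        super().__init__()
        # Single critic evaluating the joint (state, slice action, user action).
        self.q = nn.Sequential(
            nn.Linear(state_dim + slice_action_dim + user_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar Q-value
        )

    def forward(self, state, a_slice, a_user):
        return self.q(torch.cat([state, a_slice, a_user], dim=-1))
```

Given these modules, `TwinActor(16, 4, 8)(torch.randn(1, 16))` returns the pair of bounded actions; training would then follow the standard DDPG loop (target networks, replay buffer, deterministic policy gradient through the critic), with the only structural change being that both actor heads are updated from the same critic signal.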
