Abstract

With the development of edge-cloud computing technologies, distributed data centers (DCs) have been extensively deployed across the global Internet. Since different users and applications have heterogeneous requirements for specific types of ICT resources in distributed DCs, optimizing such heterogeneous resources under dynamic and even uncertain environments becomes a challenging issue. Traditional approaches cannot effectively balance utilization across different resource types when allocating multi-dimensional resources in distributed DC environments. This paper presents a reinforcement learning based approach for multi-dimensional resource allocation (termed NESRL-MRM) that achieves balanced utilization and availability of resources in dynamic environments. To train NESRL-MRM's agent within practical wall-clock time without losing exploration diversity in the search space, a natural evolution strategy (NES) is employed to approximate the gradient of the reward function. To evaluate the performance of NESRL-MRM realistically, our simulations are driven by real-world workload traces from Amazon EC2 and Google datacenters. Our results show that NESRL-MRM significantly improves on existing approaches in balancing the utilization of multi-dimensional DC resources, which substantially reduces the blocking probability of future incoming workload demands.
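The NES gradient approximation mentioned above can be illustrated with a minimal sketch: perturb the policy parameters with Gaussian noise, score each perturbation with the reward function, and combine the noise vectors weighted by their (baseline-subtracted) rewards. This is a generic NES estimator on a toy reward, not the paper's actual agent or reward; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def nes_gradient(reward_fn, theta, rng, sigma=0.1, n_samples=50):
    """Estimate the gradient of reward_fn at theta via a natural
    evolution strategy: sample Gaussian perturbations, evaluate the
    reward at each perturbed point, and average the noise vectors
    weighted by their rewards."""
    eps = rng.standard_normal((n_samples, theta.size))          # noise directions
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards -= rewards.mean()                                   # baseline subtraction reduces variance
    return eps.T @ rewards / (n_samples * sigma)                # score-function estimate

# Toy usage: ascend a concave reward f(x) = -||x - 3||^2, whose maximum is at 3.
rng = np.random.default_rng(0)
theta = np.zeros(4)
for _ in range(300):
    theta += 0.05 * nes_gradient(lambda t: -np.sum((t - 3.0) ** 2), theta, rng)
```

Because the estimator needs only reward evaluations, not backpropagation, each of the `n_samples` rollouts can be run in parallel, which is what makes NES attractive for shortening wall-clock training time.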
