Abstract

Cloud computing has recently attracted considerable research attention from both academia and industry. Virtualization allows cloud service providers (CSPs) with their own data centers to supply infrastructure, resources, and services to users by converting physical servers into virtual machines (VMs). Profit‐driven CSPs charge users for VM leasing and service access while reducing energy consumption to increase profit. CSPs nevertheless face challenges such as minimizing the energy cost of the data center. Several algorithms have been introduced to minimize energy cost through task scheduling (TS) and/or resource provisioning. However, these approaches either suffered from scalability issues or did not consider TS with task dependencies, a critical factor in ensuring the correct parallel execution of tasks. This article introduces a novel artificial intelligence algorithm, called deep reinforcement Q‐learning for resource scheduling, which integrates the features of the Q‐learning and reinforcement learning approaches. The objective of this new approach is to address the problem of managing energy consumption in a cloud computing environment. Using extensions to WorkflowSim, comparative experiments are carried out on the variance of makespan, time, cost, deadline overflow, and load balance in resource scheduling. The proposed method proves effective in terms of cost, energy consumption, resource utilization, and response time. Its resource reuse capability is 63% higher than that of the modified particle swarm optimization and modified cat swarm optimization techniques. Its task approval rate is 54% higher than that of the crow search‐based load balancing algorithm and 50% higher than that of the task duplication‐based scheduling algorithm.
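At the core of the described approach is a Q-learning-style value update applied to task-to-VM scheduling decisions. The sketch below illustrates that general idea in Python; the state encoding (per-VM queue lengths), the energy-proxy reward, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for task-to-VM scheduling.
# The state, action, and reward definitions below are assumptions
# for illustration, not the paper's exact design.

NUM_VMS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: maps (state, action) -> expected return.
Q = defaultdict(float)

def choose_vm(state):
    """Epsilon-greedy selection of a VM for the next task."""
    if random.random() < EPSILON:
        return random.randrange(NUM_VMS)
    return max(range(NUM_VMS), key=lambda vm: Q[(state, vm)])

def update(state, vm, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in range(NUM_VMS))
    Q[(state, vm)] += ALPHA * (reward + GAMMA * best_next - Q[(state, vm)])

def step(loads, vm):
    """Toy environment: state is a tuple of per-VM queue lengths.

    The reward penalizes an energy proxy: load imbalance plus total
    queued work. Each VM then completes one task per step.
    """
    loads = list(loads)
    loads[vm] += 1  # assign one task to the chosen VM
    reward = -(max(loads) - min(loads)) - 0.1 * sum(loads)
    loads = [max(0, l - 1) for l in loads]
    return tuple(loads), reward

state = (0,) * NUM_VMS
for _ in range(10_000):  # schedule a stream of incoming tasks
    vm = choose_vm(state)
    next_state, reward = step(state, vm)
    update(state, vm, reward, next_state)
    state = next_state
```

Under these assumptions, the learned policy tends toward assignments that keep VM queues balanced, which is the intuition behind using reinforcement learning for energy-aware resource scheduling.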
