Abstract

Most studies on task scheduling in the cloud consider at most two or three objectives; hence, the main motivation of this paper is to formulate the task scheduling problem with conflicting objectives (i.e., makespan, energy consumption, resource utilization, and security). The proposed algorithm consists of two stages (i.e., a meta-scheduler and a local-scheduler). In the meta-scheduler stage, tasks are assigned to hosts based on their priorities, their deadlines, and the power of the hosts. In the local-scheduler stage, the optimal mapping between tasks and virtual machines is obtained with the proposed Parallel Reinforcement Learning Caledonian Crow (PRLCC) algorithm. PRLCC combines the New Caledonian Crow Learning Algorithm (NCCLA), Reinforcement Learning (RL), and a parallel strategy. RL is used to guide each agent's activity and to balance intensification and diversification, whereas the parallel strategy helps agents explore different regions of the problem space in less time. The first experiment evaluates the proposed PRLCC as a global optimizer on 20 test functions (8 unimodal and 12 multimodal); the results demonstrate PRLCC's robustness, efficiency, and stability. The second experiment compares the performance of the proposed scheduler with four scheduling algorithms. In a heavily (lightly) loaded system, it improves waiting time by 32.5% (1.4%), energy consumption by 81% (75%), and resource utilization by 7.5% (3.5%) on average compared with the other methods, and it guarantees security by 65.5% (84%).
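To make the RL-guided balance between intensification and diversification and the parallel agent strategy concrete, the following is a minimal conceptual sketch, not the authors' implementation: an epsilon-greedy controller lets each independent search agent choose between an "intensify" move (small perturbation around its best solution) and a "diversify" move (random restart within the bounds). The function names, parameters, and the sphere objective are illustrative assumptions.

```python
# Hypothetical sketch of RL-guided intensification/diversification with
# multiple independent ("parallel") agents; not the PRLCC algorithm itself.
import random

ACTIONS = ("intensify", "diversify")

def sphere(x):
    """Toy unimodal test function used as a stand-in objective."""
    return sum(v * v for v in x)

def run_agent(dim=5, bounds=(-10.0, 10.0), iters=200, eps=0.2, alpha=0.1, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = sphere(best)
    q = {a: 0.0 for a in ACTIONS}               # learned value of each move type
    for _ in range(iters):
        # epsilon-greedy balance between intensification and diversification
        action = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        if action == "intensify":
            cand = [v + rng.gauss(0.0, 0.1) for v in best]   # local refinement
        else:
            cand = [rng.uniform(lo, hi) for _ in range(dim)] # global exploration
        cand_f = sphere(cand)
        reward = 1.0 if cand_f < best_f else -0.1            # reward improving moves
        q[action] += alpha * (reward - q[action])            # simple value update
        if cand_f < best_f:
            best, best_f = cand, cand_f
    return best_f

if __name__ == "__main__":
    # The parallel strategy is approximated here by independent agents with
    # different seeds; the overall result is the best solution any agent finds.
    results = [run_agent(seed=s) for s in range(4)]
    print("best objective over agents:", min(results))
```

In this sketch, the learned action values play the role the abstract assigns to RL (steering each agent's activity), while running several agents from different starting points stands in for the parallel search of different directions of the problem.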
