Abstract

With the widespread adoption of the Internet of Things (IoT) and the exponential growth in the volume of generated data, cloud providers receive massive waves of demand on their storage and computing resources. To help providers handle such demand without sacrificing performance, the concept of cloud automation has recently arisen to improve performance and reduce the manual effort involved in managing cloud computing workloads. In this context, we propose Deep learning Smart Scheduling (DSS), an automated big data task scheduling approach for cloud computing environments. DSS combines Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) to automatically predict the Virtual Machines (VMs) to which each incoming big data task should be scheduled, so as to improve the performance of big data analytics and reduce their resource execution cost. Experiments conducted using real-world datasets from the Google Cloud Platform show that our solution reduces the CPU usage cost by 28.8% compared to Shortest Job First (SJF), and by 14% compared to both Round Robin (RR) and an improved Particle Swarm Optimization (PSO) approach. Moreover, our solution decreases the RAM memory usage cost by 31.25% compared to SJF, by 25% compared to RR, and by 18.78% compared to the improved PSO.
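The abstract does not give the internal architecture of DSS, but the core idea it describes — an LSTM that encodes the incoming task stream and a DRL-style value head that scores each candidate VM — can be sketched minimally. The following is an illustrative toy (forward pass only, random weights, no training loop); the class names `LSTMCell` and `DSSScheduler`, the feature layout (CPU demand, RAM demand, duration), and the Q-value head are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell (forward pass only), randomly initialised."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_new = sigmoid(o) * np.tanh(c_new)
        return h_new, c_new

class DSSScheduler:
    """Toy DSS-style scheduler: the LSTM state summarises the recent task
    stream, a linear head produces one Q-value per candidate VM, and each
    task is scheduled to the VM with the highest Q-value (epsilon-greedy
    exploration would be used during DRL training)."""
    def __init__(self, task_dim, hid_dim, n_vms, seed=0):
        rng = np.random.default_rng(seed)
        self.lstm = LSTMCell(task_dim, hid_dim, seed)
        self.Wq = rng.normal(0.0, 0.1, (n_vms, hid_dim))  # Q-value head
        self.h = np.zeros(hid_dim)
        self.c = np.zeros(hid_dim)
        self.n_vms = n_vms

    def schedule(self, task_features, epsilon=0.0, rng=None):
        # Fold the new task into the LSTM state, then score every VM.
        self.h, self.c = self.lstm.step(task_features, self.h, self.c)
        q = self.Wq @ self.h
        if rng is not None and rng.random() < epsilon:
            return int(rng.integers(self.n_vms)), q  # explore
        return int(np.argmax(q)), q                  # exploit

# Example: schedule a stream of tasks, each described by hypothetical
# (CPU demand, RAM demand, estimated duration) features, onto 4 VMs.
sched = DSSScheduler(task_dim=3, hid_dim=8, n_vms=4)
for t in [np.array([0.5, 0.2, 1.0]), np.array([0.9, 0.7, 0.3])]:
    vm, q = sched.schedule(t)
    print(f"task {t} -> VM {vm}, Q-values {np.round(q, 3)}")
```

In a real DRL training loop, the reward would be the negative CPU/RAM execution cost of the chosen placement, and the Q-head and LSTM weights would be updated from observed (state, action, reward) transitions rather than fixed at random values as here.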

