Workflow Scheduling With Guaranteed Responsiveness and Minimal Cost
- Research Article
223
- 10.1016/j.parco.2017.01.002
- Jan 25, 2017
- Parallel Computing
A hybrid multi-objective Particle Swarm Optimization for scientific workflow scheduling
- Research Article
122
- 10.1016/j.cie.2020.106649
- Jul 12, 2020
- Computers & Industrial Engineering
Improved many-objective particle swarm optimization algorithm for scientific workflow scheduling in cloud computing
- Research Article
109
- 10.1016/j.jpdc.2016.11.003
- Nov 11, 2016
- Journal of Parallel and Distributed Computing
Resource provisioning and workflow scheduling in clouds using augmented Shuffled Frog Leaping Algorithm
- Conference Article
46
- 10.1109/ecows.2011.27
- Sep 1, 2011
The scheduling of workflow applications involves the mapping of individual workflow tasks to computational resources, based on a range of functional and non-functional quality-of-service requirements. Workflow applications such as scientific workflows often require extensive computational processing and generate significant amounts of experimental data. The emergence of cloud computing has introduced a utility-type market model, where computational resources of varying capacities can be procured on demand, in a pay-per-use fashion. In workflow-based applications, dependencies exist amongst tasks, which requires the generation of schedules in accordance with defined precedence constraints. These constraints pose a difficult planning problem, where tasks must be scheduled for execution only once all their parent tasks have completed. In general, the two most important objectives of workflow schedulers are the minimisation of both cost and makespan. The cost of workflow execution consists of both computational costs incurred from processing individual tasks and data transmission costs; with scientific workflows, potentially large amounts of data must be transferred between compute and storage sites. This paper proposes a novel cloud workflow scheduling approach which employs a Markov Decision Process to optimally guide the workflow execution process depending on environmental state. In addition, the system employs a genetic algorithm to evolve workflow schedules. The overall architecture is presented, and initial results indicate the potential of this approach for developing viable workflow schedules on the Cloud.
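The genetic-algorithm component described in this abstract (evolving task-to-VM mappings under precedence constraints) can be sketched minimally as follows. All task runtimes, VM speeds, prices, and GA parameters here are hypothetical illustrations, not values from the paper:

```python
import random

# Hypothetical workflow: task -> (runtime on a unit-speed VM, parent tasks).
TASKS = {"A": (4, []), "B": (3, ["A"]), "C": (2, ["A"]), "D": (5, ["B", "C"])}
VM_SPEED = [1.0, 2.0]          # relative VM speeds
VM_PRICE = [1.0, 2.5]          # cost per time unit on each VM
ORDER = ["A", "B", "C", "D"]   # a topological order of the tasks

def evaluate(chrom):
    """Return (makespan, cost) of a task->VM mapping, honouring precedence."""
    finish, vm_free, cost = {}, [0.0] * len(VM_SPEED), 0.0
    for t, vm in zip(ORDER, chrom):
        runtime = TASKS[t][0] / VM_SPEED[vm]
        ready = max((finish[p] for p in TASKS[t][1]), default=0.0)
        start = max(ready, vm_free[vm])     # wait for parents and for the VM
        finish[t] = start + runtime
        vm_free[vm] = finish[t]
        cost += runtime * VM_PRICE[vm]
    return max(finish.values()), cost

def ga(pop_size=20, gens=50, w=0.5, seed=0):
    """Evolve chromosomes (one VM index per task) against a weighted objective."""
    rng = random.Random(seed)
    pop = [[rng.randrange(len(VM_SPEED)) for _ in ORDER] for _ in range(pop_size)]
    fit = lambda c: w * evaluate(c)[0] + (1 - w) * evaluate(c)[1]
    for _ in range(gens):
        pop.sort(key=fit)
        survivors = pop[: pop_size // 2]    # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(ORDER))
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.2:          # point mutation
                child[rng.randrange(len(ORDER))] = rng.randrange(len(VM_SPEED))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fit)
    return best, evaluate(best)
```

The weight `w` trades makespan against cost; the paper's MDP-guided execution layer is not modelled here.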
- Conference Article
9
- 10.1109/bigdatacongress.2019.00018
- Jul 1, 2019
List-based scheduling algorithms have proven to be a promising strategy, with short response times, for generating feasible solutions to the workflow scheduling problem. Data-intensive and computation-intensive workflow applications have different characteristics in terms of the ratio between data transfer time and task execution time. Workflow scheduling algorithms in a cloud-based environment should adequately consider the characteristics of the underlying cloud platform, such as the on-demand resource provisioning strategy, the practically unlimited compute capacities, the booting times of virtual machines, the homogeneous network, and the pay-as-you-go price model, to produce an optimal scheduling solution within the deadline constraint of a given workflow. In this paper, a path-based scheduling algorithm, named LPOD, is proposed to find the best workflow schedule with minimum monetary cost in a cloud computing environment. A series of case studies has been carefully conducted using synthetic workflows based on DATAVIEW, a popular open-source big data workflow management system. The experimental results show that the proposed algorithm is efficient and can generate better workflow schedules than state-of-the-art algorithms such as IC-PCP and SGX-E2C2D.
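The list-based scheduling idea this abstract builds on (rank tasks by priority, then greedily place each on the resource that finishes it earliest) can be sketched as below. This is a generic earliest-finish-time heuristic in the style of HEFT, not the LPOD algorithm itself, and the DAG and VM speeds are hypothetical:

```python
# Hypothetical DAG: task -> (base runtime, parent tasks); VM speeds are illustrative.
DAG = {"t1": (6, []), "t2": (4, ["t1"]), "t3": (3, ["t1"]), "t4": (5, ["t2", "t3"])}
SPEEDS = [1.0, 1.5]

def upward_rank(task):
    """Priority = own runtime plus the longest chain of descendant runtimes."""
    children = [t for t, (_, parents) in DAG.items() if task in parents]
    return DAG[task][0] + max((upward_rank(c) for c in children), default=0)

def list_schedule():
    """Place tasks in rank order on the VM giving the earliest finish time."""
    order = sorted(DAG, key=upward_rank, reverse=True)   # highest rank first
    finish, vm_free, assign = {}, [0.0] * len(SPEEDS), {}
    for t in order:
        ready = max((finish[p] for p in DAG[t][1]), default=0.0)
        best = min(range(len(SPEEDS)),
                   key=lambda v: max(ready, vm_free[v]) + DAG[t][0] / SPEEDS[v])
        start = max(ready, vm_free[best])
        finish[t] = start + DAG[t][0] / SPEEDS[best]
        vm_free[best] = finish[t]
        assign[t] = best
    return assign, max(finish.values())
```

Because a parent's upward rank always exceeds its children's, the rank ordering is also a valid topological order.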
- Conference Article
9
- 10.1109/iot-siu.2018.8519931
- Feb 1, 2018
Workflow scheduling problems involve task-resource mapping that satisfies functional and non-functional quality-of-service requirements. Workflow applications require high computational power and often involve a large amount of data transfer from one place to another. Furthermore, due to the dependencies that exist among tasks, schedules must be produced according to given precedence constraints. Cloud computing is a new business-oriented platform service that facilitates a virtually unlimited number of services by providing heterogeneous, virtualized resources to users based on a pay-as-you-go model with distinctive quality-of-service (QoS) constraints. Due to its market-oriented approach, conventional workflow scheduling strategies face new challenges such as on-demand payment, unprecedented openness, and autonomy. This paper presents an adaptive privileged multi-objective workflow scheduling algorithm (APMWSA) which optimally runs the workflow execution process to minimize total cost and makespan. The algorithm uses a novel adaptive elite-based particle swarm optimization (NAEB-PSO) for task-resource mapping. A comparative study of the presented algorithm against several existing algorithms is also provided.
- Research Article
148
- 10.1016/j.future.2018.01.005
- Jan 8, 2018
- Future Generation Computer Systems
A GSA based hybrid algorithm for bi-objective workflow scheduling in cloud computing
- Book Chapter
5
- 10.1007/978-3-030-75657-4_9
- Jan 1, 2021
Cloud computing plays a vital role in the storage and transfer of immense volumes of data, driven by rapid growth in the size and quantity of organizational tasks. Many studies have applied varied soft computing methods to the cloud environment. In large data centers, cloud services incur not only the energy consumption cost of the infrastructure resources but also considerable environmental costs. Reducing the energy cost and carbon footprint of cloud computing systems is therefore a significant requirement. Minimizing energy consumption requires intelligent coordination across numerous diverse machines, with strategies spanning the hardware and software layers that balance performance against energy and exploit multiple resources efficiently. Energy-efficient cloud resource allocation frameworks are gaining acceptance, as they give effective consideration to cloud data management with a view to achieving maximum revenue at minimum cost. The primary objective of the chapter is to conduct a systematic study and mapping of recent soft computing techniques for resolving resource allocation and energy consumption problems in cloud computing. The chapter discusses the various soft computing techniques used in cloud environments for energy-aware resource allocation, workflow scheduling, and migration on cloud computing systems. The first section of the chapter comprises the introduction, motivation, and background, including a framework for energy- and resource-aware allocation using soft computing techniques, the main issues, the benefits of the work, and application areas of soft computing techniques for the cloud. The next section highlights the reported work, covering a detailed study of research on energy efficiency and resource allocation using soft computing techniques.
The final section of the chapter presents a comparative analysis of the work of different researchers using performance parameters such as execution time, power consumption, energy efficiency, resource utilization, response time, and makespan.
- Research Article
8
- 10.1002/cpe.6761
- Dec 28, 2021
- Concurrency and Computation: Practice and Experience
Nowadays, scientists are dealing with large-scale scientific workflows that need a high-processing-capacity platform to facilitate on-time completion. Cloud computing is the ideal platform to overcome this problem, as it offers several resources that scientists may choose from depending on the size of their applications. However, using cloud computing incurs monetary charges. Recently, cloud computing providers have started a new pricing schema that offers their users a set of resources with specific combinations of CPU frequency configuration settings and price. The selected configuration settings reflect energy consumption. Consequently, selecting a configuration that meets both users' satisfaction (minimum cost) and providers' satisfaction (energy saving) is crucial, and a multiobjective (cost and energy) efficient mechanism is essential. In this article, we address an important novel problem concerning multiobjective deadline-constrained workflow scheduling in the cloud. We first study the relationship between cost minimization and energy-consumption minimization in a cloud environment, and then discuss, develop, and propose an algorithm with two variants to help the system satisfy both sides (users and providers) at the same time during configuration selection. The proposed heuristic is evaluated using specified real-world applications. The observed results indicate that our heuristic can significantly reduce energy consumption and cost at the same time.
- Conference Article
5
- 10.1109/kbei.2015.7436152
- Nov 1, 2015
Cloud computing is an internet-based computing paradigm that opens new opportunities for researchers to investigate its benefits and disadvantages for executing scientific applications such as workflows. Workflow scheduling on distributed systems has been widely studied over the years. Most of the proposed scheduling algorithms attempt to minimize execution time without considering the cost of accessing resources, and mostly target environments similar or equal to community Grids. In the case of Cloud computing, however, faster resources are usually more expensive than slower ones, so both the execution time and the cost incurred by using a set of heterogeneous resources over the cloud should be minimized. The approach proposed in this paper is based on the Imperialist Competitive Algorithm (ICA), a recent evolutionary algorithm inspired by human socio-political evolution: it mathematically models imperialism as a stage of social evolution and uses this model for optimization problems. In this paper, we develop a static cost-minimization, deadline-constrained heuristic for scheduling a scientific workflow application in a Cloud environment. Our approach considers fundamental features of IaaS providers such as on-demand resource provisioning and unlimited computing resources. The results show that our approach performs better than the PSO algorithm in terms of cost minimization and the percentage of deadlines met.
- Conference Article
4
- 10.1109/icctac.2017.8249991
- Mar 1, 2017
Cloud computing is a state-of-the-art distributed computing paradigm that derives its features from distributed systems. It has been widely adopted by organizations because of the enormous benefits its features yield. Its scalability and heterogeneity make the Cloud most suitable for computing scientific workflow tasks, as such workflows comprise thousands of tasks and deal with huge amounts of data. Many scheduling algorithms using different methods have been proposed to compute workflow tasks in the cloud with different objectives, such as minimal makespan, minimal cost, and maximal resource utilization. This paper proposes an algorithm, Improved Workflow Scheduling using ACO (IWSACO), a variant of WFSACO (WorkFlow Scheduling using Ant Colony Optimization) that uses the swarm intelligence technique of ACO to obtain better performance.
- Research Article
35
- 10.1016/j.simpat.2021.102328
- Apr 3, 2021
- Simulation Modelling Practice and Theory
Cost and makespan aware workflow scheduling in IaaS clouds using hybrid spider monkey optimization
- Conference Article
2
- 10.1109/comsnets56262.2023.10041331
- Jan 3, 2023
Many scientific workflows can be represented by a Directed Acyclic Graph (DAG), where each node represents a task and there is a directed edge between two tasks if and only if a dependency relationship exists between them, i.e., the second cannot be started unless the first has finished. Due to the increasing computational requirements of these workflows, they are deployed on cloud computing systems. Scheduling workflows on such systems to achieve certain goals (e.g., minimization of makespan or cost, or maximization of reliability) remains an active area of research. In this paper, we propose a scheduling algorithm for allocating the nodes of a task graph in a heterogeneous multi-cloud system. The proposed scheduler considers many practical concerns such as pricing mechanisms, discounting schemes, and reliability analysis for task execution. It is a list-based heuristic that allocates tasks based on the expected times for which VMs need to be rented for them. We have analyzed the proposed approach to understand its time requirement. We perform a large number of experiments with real-world workflows (FFT, Ligo, Epigenomics, and random workflows) and observe that the proposed scheduler outperforms state-of-the-art approaches by up to 12%, 11%, and 1.1% in terms of cost, makespan, and reliability, respectively.
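The DAG precedence rule this abstract describes (a task may start only after all predecessors finish) can be made concrete with a small sketch: Kahn's algorithm propagating earliest start times through hypothetical example tasks, giving the critical-path lower bound on makespan under unlimited parallelism:

```python
from collections import deque

# Hypothetical DAG: an edge (u, v) means v cannot start until u finishes.
EDGES = [("fft1", "combine"), ("fft2", "combine"), ("combine", "output")]
RUNTIME = {"fft1": 3, "fft2": 5, "combine": 2, "output": 1}

def earliest_finish(edges, runtime):
    """Kahn's algorithm: earliest finish time per task, unlimited parallelism."""
    indeg = {t: 0 for t in runtime}
    succ = {t: [] for t in runtime}
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    start = {t: 0 for t in runtime}
    queue = deque(t for t in runtime if indeg[t] == 0)   # tasks with no parents
    finish = {}
    while queue:
        u = queue.popleft()
        finish[u] = start[u] + runtime[u]
        for v in succ[u]:
            start[v] = max(start[v], finish[u])          # wait for the slowest parent
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return finish  # max value = critical-path lower bound on makespan
```

Here `combine` waits until time 5 for `fft2`, so the whole workflow cannot finish before time 8 regardless of how many VMs are rented.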
- Research Article
160
- 10.1016/j.future.2019.08.012
- Aug 12, 2019
- Future Generation Computer Systems
Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
- Research Article
16
- 10.1007/s11227-019-02877-8
- May 11, 2019
- The Journal of Supercomputing
Scientific communities are motivated to schedule data-intensive scientific workflows in multi-cloud environments, where multiple clouds provide considerably diverse resources and the resource limitations imposed by individual clouds are overcome. However, this scheduling involves two conflicting objectives: minimizing cost and minimizing makespan. Dealing with such conflicting criteria is generally difficult, but recent efficient methods for solving multi-objective optimization problems motivated us to provide a multi-objective model that takes minimization of cost and makespan as objectives. For solving this model, we use different scalarization procedures, such as weighted-sum, Benson's scalarization, and weighted min-max, under different scenarios. Moreover, we investigate the stability of the obtained solutions and propose a new approach for determining the most stable solution related to weighted-sum and weighted min-max as post-optimality analysis. Results indicate that our proposed weighted-sum approach outperforms previously developed methods in terms of hypervolume.
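The weighted-sum and weighted min-max scalarizations mentioned in this abstract reduce the bi-objective (cost, makespan) problem to a single score. A minimal sketch over hypothetical candidate schedules (the objective values and the ideal point below are invented for illustration):

```python
# Hypothetical Pareto candidates: (cost, makespan) pairs for competing schedules.
CANDIDATES = [(10.0, 30.0), (14.0, 22.0), (20.0, 18.0), (28.0, 16.0)]

def weighted_sum(points, w):
    """Pick the schedule minimizing w*cost + (1-w)*makespan."""
    return min(points, key=lambda p: w * p[0] + (1 - w) * p[1])

def weighted_minmax(points, w, ideal=(10.0, 16.0)):
    """Pick the schedule minimizing the largest weighted deviation from the ideal point."""
    return min(points, key=lambda p: max(w * (p[0] - ideal[0]),
                                         (1 - w) * (p[1] - ideal[1])))
```

Sweeping the weight `w` over [0, 1] traces out different trade-offs: a cost-heavy weight selects the cheap, slow schedule, while balanced weights select a compromise point.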