HLSA – A New Hybrid List Scheduling Algorithm for Fog Computing

Abstract

A hybrid list scheduling algorithm is applied in fog computing, where both the available resources and the incoming scheduling units are heterogeneous. The scheduling units that need to be scheduled may be independent, with no precedence constraints, so that tasks can be executed in parallel; alternatively, precedence constraints may exist between tasks and be represented by a Directed Acyclic Graph (DAG). Some scheduling algorithms are efficient for independent tasks, while others excel at handling dependency workflows. This paper proposes a Hybrid List Scheduling Algorithm (HLSA) that handles all scheduling unit types and examines the impact of the incoming scheduling unit type on the performance of the proposed algorithm. HLSA assigns priority to time-sensitive tasks in a cumulative way to achieve minimum latency for latency-sensitive IoT applications in fog computing and to minimize makespan, computation cost, and communication cost. HLSA also aims to maximize processor utilization by exploiting idle gaps in processor schedules.
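The list-scheduling idea underlying algorithms like HLSA can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the HEFT-style ranking function, the tie-breaking, and the two-processor setup are assumptions for illustration, and cross-processor communication delays are folded into the priority ranking only, not the placement step.

```python
# Minimal list-scheduling sketch (NOT the paper's HLSA): rank tasks by an
# HEFT-style "upward rank", then greedily place each task on the processor
# that finishes it earliest. Communication costs enter the ranking only;
# cross-processor transfer delays are ignored at placement for brevity.

def upward_rank(task, succ, comp, comm, memo=None):
    """Priority = task's cost plus the costliest path to an exit task."""
    if memo is None:
        memo = {}
    if task not in memo:
        memo[task] = comp[task] + max(
            (comm.get((task, s), 0) + upward_rank(s, succ, comp, comm, memo)
             for s in succ.get(task, [])),
            default=0,
        )
    return memo[task]

def schedule(tasks, succ, comp, comm, n_procs):
    """Return (task -> processor, makespan) for a DAG given as adjacency lists."""
    pred = {t: [] for t in tasks}
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)
    memo = {}
    order = sorted(tasks, key=lambda t: -upward_rank(t, succ, comp, comm, memo))
    free = [0.0] * n_procs            # earliest idle time of each processor
    finish = {}                       # task -> finish time
    placement = {}
    for t in order:                   # a predecessor always ranks higher than
        ready = max((finish[p] for p in pred[t]), default=0.0)  # its successors
        best = min(range(n_procs), key=lambda p: max(free[p], ready) + comp[t])
        free[best] = max(free[best], ready) + comp[t]
        finish[t] = free[best]
        placement[t] = best
    return placement, max(finish.values())
```

For a toy fork DAG a → {b, c} with costs 2/3/1 on two processors, the sketch runs b after a on one processor and c in parallel on the other, giving a makespan of 5.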

Similar Papers
  • Research Article
  • 10.1109/tcns.2021.3094786
Control Policies for Recovery of Interdependent Systems After Disruptions
  • Dec 1, 2021
  • IEEE Transactions on Control of Network Systems
  • Hemant Gehlot + 2 more

We examine a control problem where the states of the components of a system deteriorate after a disruption if they are not being repaired by an entity. There exists a set of dependencies in the form of precedence constraints between the components, captured by a directed acyclic graph (DAG). The objective of the entity is to maximize the number of components whose states are brought back to the fully repaired state within a given time. We prove that the general problem is NP-hard and therefore characterize near-optimal control policies for special instances of the problem. We show that when the deterioration rates are larger than or equal to the repair rates and the precedence constraints are given by a DAG, it is optimal to continue repairing a component until it reaches the fully recovered state before switching to repair any other component. Under the aforementioned assumptions, and when the deterioration and repair rates are homogeneous across all components, we prove that a control policy that targets the healthiest component at each time-step, while respecting the precedence and time constraints, fully repairs at least half the number of components that would be fully repaired by an optimal policy.
Finally, we prove that when the repair rates are sufficiently larger than the deterioration rates, the precedence constraints are given by a set of disjoint trees that each contain at most $k$ nodes, and there is no time constraint, a policy that targets the component with the least value of health minus deterioration rate at each time-step, while respecting the precedence constraints, fully repairs at least $1/k$ times the number of components that would be fully repaired by an optimal policy.

  • Conference Article
  • Cited by 8
  • 10.1109/rtss46320.2019.00026
An Efficient Utilization-Based Test for Scheduling Hard Real-Time Sporadic DAG Task Systems on Multiprocessors
  • Dec 1, 2019
  • Zheng Dong + 1 more

The scheduling and schedulability analysis of real-time directed acyclic graph (DAG) task systems have received much recent attention. The DAG model can accurately represent the intra-task parallelism and precedence constraints existing in many application domains. Existing techniques show that analyzing the DAG model is fundamentally more challenging than the ordinary sporadic task model, due to the complex intra-DAG precedence constraints, which may cause rather pessimistic schedulability loss. However, such increased loss is counter-intuitive because the DAG structure should better exploit the hardware parallelism provided by the multiprocessor platform. Our key observation is that the intra-DAG precedence constraints, if not carefully considered by the scheduling algorithm, may cause unpredictable execution behaviors of subtasks in a DAG and thus pessimistic analysis. In this paper, we present a set of novel scheduling and analysis techniques for better supporting hard real-time sporadic DAG tasks on multiprocessors, by smartly defining and analyzing the execution order of subtasks in each DAG. Combined with a new DAG-specific interval analysis framework, the proposed subtask ordering technique leads to a highly efficient utilization-based schedulability test. Importantly, the developed test becomes identical to the classical density test designed for the sporadic task model if each DAG in the system has an out-degree of one (i.e., only contains a chain of subtasks). Experiments show the efficiency of the developed test, which improves schedulability upon existing utilization-based tests by over 60% on average and is often able to guarantee schedulability with little utilization loss.

  • Book Chapter
  • 10.1007/3-540-45403-9_18
A Duplication Based Compile Time Scheduling Method for Task Parallelism
  • Jan 1, 2001
  • Sekhar Darbha + 1 more

The cost of inter-processor communication is one of the major bottlenecks of a distributed memory machine (DMM), which can be offset with efficient algorithms for task partitioning and scheduling. Based on the data dependencies, the task partitioning algorithm partitions the application program into tasks and represents them in the form of a directed acyclic graph (DAG) or in compiler intermediate forms. The scheduling algorithm schedules the tasks onto individual processors of the DMM in an effort to lower the overall parallel time. It has long been proven that obtaining an optimal schedule for a generic DAG is an NP-hard problem. This chapter presents a Scalable Task Duplication based Scheduling (STDS) algorithm which can schedule the tasks of a DAG with a worst-case complexity of O(|v|²), where v is the set of tasks of the DAG. The STDS algorithm generates an optimal schedule for a certain class of DAGs which satisfy a Cost Relationship Condition (CRC), provided the required number of processors is available. If the required number of processors is not available, the algorithm scales the schedule down to the available number of processors. The performance of the scheduling algorithm has been evaluated by applying it to practical DAGs and by comparing the parallel time of the generated schedule against the absolute or theoretical lower bound.

  • Research Article
  • Cited by 3
  • 10.1145/2724928.2724931
Simulation-based evaluations of DAG scheduling in hard real-time multiprocessor systems
  • Jan 22, 2015
  • ACM SIGAPP Applied Computing Review
  • Manar Qamhieh + 1 more

The scheduling of parallel real-time tasks on multiprocessor systems is more complicated than that of independent sequential tasks, especially for the Directed Acyclic Graph (DAG) model. The complexity is due to the structure of DAG tasks and the precedence constraints between their subtasks. The trivial DAG scheduling approach is to directly apply common real-time scheduling algorithms to DAGs despite their lack of compatibility with the parallel model. Another scheduling approach, called the stretching method, aims at transforming each parallel DAG task in the set into a collection of independent sequential threads that are easier to schedule. In this paper, we are interested in analyzing global preemptive scheduling of DAGs using both approaches, showing that they are not comparable when associated with the Deadline Monotonic (DM) and Earliest Deadline First (EDF) scheduling algorithms. We then use extensive simulations to evaluate their schedulability performance. To this end, we use our simulation tool YARTISS to generate random DAG tasks with many parameter variations so as to guarantee reliable experimental results.

  • Research Article
  • Cited by 56
  • 10.1016/j.comnet.2020.107731
A clogging resistant secure authentication scheme for fog computing services
  • Dec 7, 2020
  • Computer Networks
  • Zeeshan Ali + 5 more


  • Conference Article
  • Cited by 2
  • 10.1109/trustcom.2016.0319
A Hierarchic Hybrid Scheduling Algorithm for Static Task with Precedence Constraints
  • Aug 1, 2016
  • Yuanyuan Xie + 2 more

In distributed computing, the task scheduling problem is NP-hard. Strictly satisfying the task dependencies and minimizing the execution time are the desired aims of a task scheduling algorithm. However, most scheduling algorithms focus on minimizing the execution time without a clear strategy for preserving the precedence constraints. In this paper, a new two-phase algorithm called Hierarchic Hybrid Heuristic-Genetic Scheduling (H3GS) is introduced and developed for heterogeneous distributed computing systems (HeDCSs). In the first phase, the paper proposes an improved list-based heuristic algorithm, called H2EFT, whose main principle is to divide the Directed Acyclic Graph (DAG) into levels, which simplifies the task dependencies and preserves the precedence constraints of tasks. The second phase implements a hierarchic genetic algorithm, called HGAS, which evolves shorter schedules by inserting the H2EFT schedule into the initial populations and repairing invalid schedules on the basis of the DAG hierarchy. Consequently, the H3GS algorithm delivers high-quality solutions in reasonable computing time based on a hierarchical DAG. The comparative performance results show that H3GS outperforms the HEFT, H2EFT, and H2GS algorithms in efficiency, complexity, and quality.
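The level-division idea used in H2EFT's first phase can be sketched as follows; the function name and graph encoding are illustrative assumptions, not taken from the paper.

```python
# Illustrative DAG leveling: each task's level is the length of the longest
# path from an entry task, so all of a task's predecessors sit in strictly
# earlier levels and precedence constraints are preserved level by level.

def dag_levels(succ):
    """Group tasks of a DAG {task: [successors]} into {level: [tasks]}."""
    tasks = set(succ) | {s for ss in succ.values() for s in ss}
    level = {}
    def depth(t):
        if t not in level:
            preds = [p for p, ss in succ.items() if t in ss]
            level[t] = 1 + max((depth(p) for p in preds), default=-1)
        return level[t]
    for t in tasks:
        depth(t)
    by_level = {}
    for t in sorted(level):
        by_level.setdefault(level[t], []).append(t)
    return by_level
```

A diamond DAG a → {b, c} → d yields levels {0: ['a'], 1: ['b', 'c'], 2: ['d']}: scheduling level by level can never violate a dependency.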

  • Conference Article
  • Cited by 2
  • 10.5121/csit.2014.4306
Multiple DAG Applications Scheduling on a Cluster of Processors
  • Mar 7, 2014
  • Uma Boregowda + 1 more

Many computational solutions can be expressed as a Directed Acyclic Graph (DAG), in which nodes represent tasks to be executed and edges represent precedence constraints among tasks. A cluster of processors is a shared resource among several users, hence the need for a scheduler that deals with multi-user jobs presented as DAGs. The scheduler must find the number of processors to be allotted to each DAG and schedule tasks on the allotted processors. In this work, a new method to find the optimal and maximum number of processors that can be allotted to a DAG is proposed. Regression analysis is used to find the best possible way to share the available processors among a suitable number of submitted DAGs. An instance of a scheduler for each DAG schedules tasks on the allotted processors. Toward this end, a new framework to receive online submissions of DAGs, allot processors to each DAG, and schedule tasks is proposed and evaluated using a simulator. This space-sharing of processors among multiple DAGs shows better performance than the other methods found in the literature. Because of space-sharing, an online scheduler can be used for each DAG within its allotted processors. The use of an online scheduler overcomes the drawbacks of static scheduling, which relies on inaccurate estimated computation and communication costs. Thus the proposed framework is a promising solution for performing online scheduling of tasks using static DAG information, a kind of hybrid scheduling.

  • Research Article
  • Cited by 5
  • 10.1109/tsc.2022.3192095
Scheduling Precedence Constrained Tasks for Mobile Applications in Fog Computing
  • Jan 1, 2022
  • IEEE Transactions on Services Computing
  • Keqin Li

We consider scheduling precedence constrained tasks of a mobile application in a fog computing environment, which faces multiple challenges of precedence constraints, power allocation, and performance-cost tradeoff. Our strategies to handle the three challenges are described as follows. First, in pre-power-allocation algorithms and post-power-allocation algorithms, precedence constraints are handled by the classic list scheduling algorithm and the level-by-level scheduling method respectively. Second, in a pre-power-allocation algorithm (a post-power-allocation algorithm, respectively), a power allocation strategy is determined before (after, respectively) a computation offloading strategy is decided. Third, the performance-cost tradeoff is dealt with by defining the energy-constrained scheduling problem and the time-constrained scheduling problem. That is, between performance and cost, we fix one and minimize the other. The main contributions of the present paper are highlighted as follows. We develop a class of pre-power-allocation algorithms for both energy-constrained and time-constrained scheduling, which are based on the classic list scheduling algorithm and the equal-energy method. We develop a class of post-power-allocation algorithms for both energy-constrained and time-constrained scheduling, which are based on the level-by-level scheduling method and our previously proposed algorithms for independent tasks. We evaluate the proposed algorithms by extensive experiments on mobile applications with randomly generated directed acyclic graphs and identify the most effective and efficient heuristic algorithms. Our research in this paper studies computation offloading in the context of traditional task scheduling while taking new and unique features of fog computing into consideration. To the author's best knowledge, there has been no such or similar study in the current literature.

  • Research Article
  • Cited by 8
  • 10.18280/isi.260208
A Survey on Various Methods and Algorithms of Scheduling in Fog Computing
  • Apr 30, 2021
  • Ingénierie des systèmes d information
  • Raouf Belmahdi + 2 more

The rapid deployment of IoT in different areas generates a massive amount of data transferred to the Cloud. To address this challenge, a new paradigm called Fog Computing places resources at the edge of the network, close to the connected objects. Its main role is to extend the capacities of the Cloud and to improve the performance and QoS required by applications through different methods and techniques based on scheduling algorithms. In this paper, we review various recent studies in the literature concerned with the scheduling methods and algorithms used in Fog computing. Using the fog layer to solve optimization problems faces serious challenges. Therefore, to help practitioners and researchers, we present an in-depth overview of Fog Computing, studying various scheduling methods and algorithms. We analyze, compare, and classify these scheduling approaches according to the nature of the algorithm used, the QoS optimized by the proposed approach, and the type of application, in order to show what is suitable for critical IoT (CIoT), massive IoT (MIoT), and Industrial IoT (IIoT). Finally, we present a comparison of the different simulation tools used to evaluate these approaches, to guide fog-computing developers and researchers toward the tool that is most suitable and flexible for simulating the application under consideration.

  • Book Chapter
  • Cited by 10
  • 10.1007/978-3-642-30111-7_93
DAGITIZER – A Tool to Generate Directed Acyclic Graph through Randomizer to Model Scheduling in Grid Computing
  • Jan 1, 2012
  • D I George Amalarethinam + 1 more

Scheduling is fundamentally resource management. A group of interdependent jobs/tasks forms a workflow application, and scheduling maps the jobs/tasks onto a collection of heterogeneous resources spread over a wide geographic area. Most complicated applications consist of interdependent jobs that coordinate to solve a problem. The completion of a particular job is the criterion that must be met before the jobs that depend on it can start executing [1]. This kind of workflow application may be represented in the form of a Directed Acyclic Graph (DAG); a Grid workflow is such an application and is modeled by a DAG. This paper proposes a tool that generates a Directed Acyclic Graph through a randomizer, which helps solve the scheduling problem among dependent tasks by considering two parameters: the computation cost (COMPCost) of the nodes and the communication cost (COMMCost) between the nodes. The tool is developed in Java as a platform-independent, web-authoring application. The task dependencies are made random, and the computation and communication costs are also allocated randomly by the randomizer. The output generated by the tool includes (i) a visual rendering of the actual DAG, (ii) a table with complete information on each task, its predecessors, COMPCost, and COMMCost, and (iii) a detailed description of the number of levels, the number of tasks at each level, the identification of tasks in a level, and the relationships between the nodes.
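A minimal randomizer in the spirit described above might look like this (in Python rather than the tool's Java, with illustrative parameter names): drawing edges only from lower-numbered to higher-numbered tasks guarantees acyclicity, and computation/communication costs are drawn uniformly at random.

```python
import random

# Illustrative random-DAG generator (an assumption, not the DAGITIZER code):
# edges go only from lower- to higher-numbered tasks, so the result is
# acyclic by construction; COMPCost/COMMCost are drawn uniformly at random.

def random_dag(n_tasks, edge_prob=0.3, max_comp=50, max_comm=20, seed=None):
    """Return (successors, COMPCost per node, COMMCost per edge)."""
    rng = random.Random(seed)
    comp = {t: rng.randint(1, max_comp) for t in range(n_tasks)}
    succ = {t: [] for t in range(n_tasks)}
    comm = {}
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):
            if rng.random() < edge_prob:
                succ[i].append(j)
                comm[(i, j)] = rng.randint(1, max_comm)
    return succ, comp, comm
```

Fixing the seed makes a generated benchmark reproducible, which matters when comparing scheduling algorithms on the same random graphs.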

  • Conference Article
  • Cited by 2
  • 10.1145/2663761.2664236
An experimental analysis of DAG scheduling methods in hard real-time multiprocessor systems
  • Oct 5, 2014
  • Manar Qamhieh + 1 more


  • Research Article
  • Cited by 3
  • 10.1155/2014/202843
Decentralized Scheduling Algorithm for DAG Based Tasks on P2P Grid
  • Jan 1, 2014
  • Journal of Engineering
  • Piyush Chauhan + 1 more

Complex problems consisting of interdependent subtasks are represented by a directed acyclic graph (DAG). The subtasks of this DAG are scheduled by the scheduler on various grid resources. Scheduling algorithms for the grid strive to optimize the schedule. Nowadays many grid resources are attached via the P2P approach. Grid systems and the P2P model are both emerging distributed-computing approaches; combining the P2P model and grid systems yields P2P grid systems. P2P grid systems require a fully decentralized scheduling algorithm that can schedule interdependent subtasks among non-uniform computational resources; the absence of a central scheduler creates the need for such an algorithm. In this paper we propose a scheduling algorithm that not only optimizes the schedule but does so in a fully decentralized fashion. Hence, this unconventional approach suits P2P grid systems well. Moreover, the algorithm takes accurate scheduling decisions based on both the computation cost and the communication cost associated with the DAG's subtasks.

  • Research Article
  • Cited by 40
  • 10.1006/jpdc.1997.1376
A Task Duplication Based Scalable Scheduling Algorithm for Distributed Memory Systems
  • Oct 1, 1997
  • Journal of Parallel and Distributed Computing
  • Sekhar Darbha + 1 more


  • Research Article
  • Cited by 42
  • 10.1007/s00607-021-00935-9
An improved list-based task scheduling algorithm for fog computing environment
  • Mar 27, 2021
  • Computing
  • R Madhura + 2 more

A high-performance execution of programs predominantly depends on the efficient scheduling of tasks. An application consists of a sequence of tasks that can be represented as a directed acyclic graph (DAG). The tasks in the DAG have precedence constraints between them, and each task has a different timeline on different processors. In this paper, a new list-based scheduling algorithm is proposed which schedules tasks represented as a DAG structure. The main focus of this algorithm is to schedule tasks to suitable processing nodes in a fog environment, as fog nodes have limited processing capacity. The assignment of tasks to a fog node should consider both the computation cost of the node and its execution finishing time. The proposed algorithm has three phases: (1) a level-sorting phase, where independent tasks are identified; (2) a task-prioritization phase, which assigns priority to the task with more successors, so that more tasks in the next level can start their execution; and (3) a task-selection phase, where a balanced combination of local-optimal and global-optimal approaches assigns each task to a suitable processor, further enhancing processor selection and minimizing both the makespan and the overall computation cost of the processors. Extensive experiments are carried out using randomly generated graphs and graphs from the real world to analyze the performance of the proposed algorithm. The results show that the proposed algorithm outperforms well-known algorithms such as predict earliest finish time, heterogeneous earliest finish time, minimal optimistic processing time, and SDBBATS in terms of performance metrics such as average scheduling length ratio, speedup, and makespan.
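The prioritization rule of the second phase, schedule the task with more successors first so that more next-level tasks become ready, reduces to a one-line sort; the graph encoding below is an assumption for illustration, not the paper's implementation.

```python
# Sketch of the successor-count priority rule: within one DAG level, tasks
# that unlock more next-level tasks (more successors) are scheduled first.

def prioritize_level(level_tasks, succ):
    """Order one level's tasks by descending number of successors."""
    return sorted(level_tasks, key=lambda t: -len(succ.get(t, [])))
```

With succ = {'a': ['x', 'y', 'z'], 'b': ['x']}, the level ['b', 'a'] is reordered to ['a', 'b'], releasing three next-level tasks as early as possible.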

  • Research Article
  • 10.33545/27075907.2020.v1.i2a.16
A novel task scheduling algorithm with improved make span based on prediction of tasks computation time algorithm for cloud computing
  • Jul 1, 2020
  • International Journal of Cloud Computing and Database Management
  • Maddela Kavya

In this paper, a new scheduling algorithm called Prediction of Tasks Computation Time (PTCT) is proposed to estimate the minimum task execution time/makespan in a cloud computing environment. Nowadays cloud service providers offer resources to end users at a very low rate and, at the same time, use scheduling algorithms to ensure that all users receive responses quickly. Various scheduling algorithms have been implemented in cloud environments, such as MIN-MIN, MAX-MIN, and QoS-Guided. The MIN-MIN algorithm schedules the tasks with the shortest execution times first and then the remaining tasks; in simple terms, it gives priority to short tasks. The MAX-MIN algorithm schedules the tasks with the longest execution times first and then the short tasks; in simple terms, it gives priority to long tasks. Many more scheduling algorithms exist, but these two are very popular. Neither looks for the resource that achieves the minimum execution time, whereas the proposed PTCT algorithm examines all resources/processors/machines, forms a matrix containing the estimated execution time of every job, and then, by applying the Principal Component Analysis (PCA) algorithm, predicts and chooses the resource with the minimum execution time, assigning the new task to that resource. Here a resource could be a computer, a processor, or a virtual machine. In the proposed PTCT algorithm, we build an array of all tasks and processors as a Directed Acyclic Graph (DAG) and then build a matrix over all processors and tasks, each row filled with the processors' estimated execution times for a task. PCA is applied to the generated matrix to choose the processor that takes the least execution time for the selected task, and this process continues until all tasks are assigned.
By applying the PTCT algorithm we can further decrease the computation and communication cost on the cloud side. To implement this paper, we designed three algorithms in simulation and compared their execution/makespan times; among the three, PTCT took the least execution time for all tasks.
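A much-simplified sketch of the matrix step described above: build an estimated-time-to-compute (ETC) matrix over tasks and processors and assign each task to its cheapest processor. The paper's PCA-based prediction step is not reproduced here; this only illustrates the matrix shape and the final selection.

```python
# Simplified ETC-matrix assignment (NOT the paper's PCA reduction):
# etc[t][p] is the estimated execution time of task t on processor p;
# each task simply goes to the processor with the lowest estimate.

def assign_by_etc(etc):
    """Map each task index to the index of its fastest processor."""
    return {t: min(range(len(row)), key=row.__getitem__)
            for t, row in enumerate(etc)}
```

For etc = [[5, 2, 9], [1, 4, 3]], task 0 goes to processor 1 and task 1 to processor 0. A real PTCT-style scheme would first reduce the matrix (e.g., via PCA) before selecting.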
