Abstract

We synthesize online scheduling algorithms to optimally assign a set of arriving heterogeneous tasks to heterogeneous speed-scalable processors under the single-threaded computing architecture. Using dynamic speed scaling, where each processor's speed can change dynamically within hardware and software processing constraints, our algorithms aim to minimize the total financial cost (in dollars) of response time and energy consumption (TCRTEC) of the tasks. In our work, the processors are heterogeneous in that they may differ in their hardware specifications with respect to maximum processing rate, power function parameters, and energy sources. Tasks are heterogeneous in terms of computation volume, memory, and minimum processing requirements. We also consider the unit price of response time of each task to be heterogeneous, because the user may be willing to pay higher or lower unit prices for certain tasks, thereby increasing or decreasing their optimum processing rates. We model the loading overhead incurred when a task is loaded by a given processor prior to its execution, and we assume this overhead to be heterogeneous as well.

Under the single-threaded, single-buffered computing architecture, we synthesize the SBDPP algorithm and two further versions of it. The first two versions allow the user to specify the unit prices of energy and response time for executing each arriving task. The second version extends the first by allowing the user or the operating system of the computing device to further modify a task's unit price of time or energy in order to reach a linearly controlled operating point that lies somewhere on the economy-performance mode continuum of the task's execution. The third version operates exclusively on this continuum. We briefly extend the algorithm and its versions to support migration, where an unfinished task is paused and resumed on another processor. The SBDPP algorithm is qualitatively compared against its two other versions, and the SBDPP dispatcher is analytically shown to outperform the well-known Round Robin dispatcher in terms of the TCRTEC performance metric. Through simulations, we deduce a relationship between the arrival rate of tasks, the number of processors, and the response time of tasks.

Under the single-threaded, multi-buffered computing architecture, we make four contributions that constitute the SMBSPP algorithm. First, we propose a novel task dispatching strategy for assigning tasks to processors. Second, we propose a novel preemptive service discipline, Smallest remaining Computation Volume Per unit Price of response Time (SCVPPT), to schedule the tasks on the assigned processor. Third, we propose a dynamic speed-scaling function that explicitly determines the optimum processing rate of each task. Most of the simulations consider both stochastic and deterministic traffic conditions. Our simulation results show that SCVPPT outperforms two well-known service disciplines, Shortest Remaining Processing Time (SRPT) and First Come First Served (FCFS), in minimizing the TCRTEC performance metric. The results also show that the algorithm's dispatcher drastically outperforms the well-known Round Robin dispatcher, with cost savings exceeding 100% even when the processors are mildly heterogeneous. Finally, analytical and simulation results show that our speed scaling function performs better than a comparable speed scaling function in the current literature.
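To make the SCVPPT rule concrete, the following is a minimal sketch (in Python) of how a processor's buffer could be scanned at each scheduling decision: the task with the smallest ratio of remaining computation volume to its unit price of response time is chosen, preempting the currently running task if necessary. The class, field names, and numbers below are illustrative assumptions, not definitions taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Task:
    # Illustrative fields; names and units are assumptions, not the thesis's notation.
    name: str
    remaining_volume: float   # remaining computation volume (e.g., CPU cycles)
    price_of_time: float      # user-specified unit price of response time ($ per second)

def scvppt_pick(buffer):
    """Smallest remaining Computation Volume Per unit Price of response Time:
    return the buffered task with the smallest remaining_volume / price_of_time,
    i.e. short tasks and tasks whose waiting time is expensive are favoured."""
    return min(buffer, key=lambda t: t.remaining_volume / t.price_of_time)

# Example: task B has less remaining work and a higher price of time, so it is served first.
buffer = [Task("A", remaining_volume=8e9, price_of_time=0.01),
          Task("B", remaining_volume=2e9, price_of_time=0.05)]
print(scvppt_pick(buffer).name)   # -> B
```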
Under a fixed budget of energy, we synthesize the SMBAD algorithm, which uses the microeconomic laws of Supply and Demand (LSD) to heuristically adjust the unit price of energy in order to extend battery life and execute more than 50% of the tasks on a single processor (under the single-threaded, multi-buffered computing architecture). By extending all our multiprocessor algorithms to factor in independent (battery) energy sources, each associated with one processor, we analytically show that load balancing effects are induced on heterogeneous parallel processors. This happens when the unit price of energy is adjusted according to each processor's battery level, in accordance with LSD. Furthermore, we show that a variation of this load balancing effect also occurs when the heterogeneous processors share a single battery, as long as they operate at unconstrained processing rates.
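As a rough illustration of the LSD-based pricing heuristic described above, the sketch below raises a processor's unit price of energy as its remaining battery fraction falls (scarcer supply, higher price), which in turn discourages dispatching work to nearly depleted processors. The inverse-proportional rule, function names, and numbers are assumptions made for illustration; they are not the thesis's exact formulation.

```python
def adjusted_energy_price(base_price: float, battery_fraction: float) -> float:
    """Heuristic in the spirit of the laws of supply and demand (LSD):
    as the remaining battery fraction (0 < battery_fraction <= 1) drops,
    energy becomes scarcer and its unit price rises.  The inverse-proportional
    rule used here is only an illustrative choice."""
    return base_price / max(battery_fraction, 1e-6)

# A cost-minimizing dispatcher that charges each task the processor-specific
# energy price will steer load away from processors with depleted batteries,
# which is the load-balancing effect described above.
batteries = {"P1": 0.9, "P2": 0.3}              # remaining battery fractions
prices = {p: adjusted_energy_price(0.10, b)     # assumed base price: $0.10 per energy unit
          for p, b in batteries.items()}
print(prices)   # P2's energy is priced at three times P1's
```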

Highlights


  • By extending all our multiprocessor algorithms to factor in independent energy sources, each associated with one processor, we analytically show that load balancing effects are induced on heterogeneous parallel processors

  • We focused on single-buffer, single-threaded operation, where no processor executes more than a single task at any given time


Summary

Motivation

Energy consumption is a major constraint in today’s computing devices. A principal engineer at Google warns that, within a few years, power costs could substantially exceed (server) hardware costs under the current trend of performance and power consumption [16]. Portable battery life can be extended by higher-capacity batteries or through remote execution [55]. On the go, it can be extended by portable energy-restoration devices such as the solar panel chargers produced and sold by XTG Technology [67]. From an algorithmic perspective, computing devices can use variable-speed processors to regulate the energy consumption and completion time of executing jobs/tasks. Some speed scaling algorithms factor in both the time and energy consumption of tasks [1, 68]. This thesis primarily investigates how to schedule arriving heterogeneous tasks online on multiple heterogeneous, speed-scalable processors with the goal of minimizing the financial cost of response time and energy consumption of tasks. In a later chapter of this thesis, we allow the unit price of energy for all tasks to be heuristically adjusted by the microeconomic laws of supply and demand so as to conserve energy and improve load balancing on heterogeneous processors.

Research Overview
Related Works
Thesis Contribution
Chapter 4
Introduction
Speed Scaling
PDM (Under Static Speed Scaling) For Single Processors
PDM Problem Scenario
Competitive Analysis (Relevant to PDM)
PDM for Two States
PDM for Multiple States
Dynamic Speed Scaling (Single Processors)
Competitive Analysis (Relevant to Dynamic Speed Scaling)
Deadline Based Scheduling (Single Processor)
Overview of
Deadline Based Scheduling Under Maximum Processing Rate Constraints (Single Processor)
Minimizing Temperature (Single Processor)
Minimizing Flow Time (Single Processor)
FTPE - Unweighted
FTPE - Fractionally Weighted
FTPE - Weighted
Multithreading (Processor sharing) Extension
Flow Time Plus Energy (FTPE) For Heterogeneous Multi Processors
Chapter 3: Theoretical Framework
A Processing Stream r
Stream Processor r
Memory Queue r
Modeling Processing Rate and Execution
Description of a Task's Computation Volume upon
TCRTEC Performance Metric
Distinguishing our
Does Not Factor Battery
Chapter 4: Cost Minimization For Scheduling
Minimized Cost Function of the j-th Processing Stream
Minimized Constrained Cost Function of the j-th Processing Stream
Single-Buffer Decision & Parallel Processing Algorithm (SBDPP)
Minimized Constrained Cost Function Using The Power Sensitivity
Single Buffer Assisted Decision & Processing Algorithm (SBADPA)
Conclusions
The Cost Function of the j-th Processing Stream
The Minimized Cost Function of the j-th Processing Stream
The Minimized Constrained Cost Function of the j-th Processing
A Simple
Simulation I
Simulation II
Simulation III
Analytically Comparing OSTSSF to a Competitive Speed Scaling Function in Current Literature
Simulation IV
Managing the Remaining Battery Energy Percentage
Price D2
Price D1
Problem Formulation r
E_cap
Minimized Constrained Cost
Mobile Hardware Parameters For Multiple Energy Sources
Extending The SBADPA Algorithm to Include EPARBEB
Single-threading Multi-buffer Scheduling & Processing Algorithm (SMBSPP) under EPARBEB and UEP Modes
Effects of the EPARBEB and UEP Modes on the Speed Scaling functions of the Algorithms
Effects of the EPARBEB and UEP Modes on the Dispatchers of the Algorithms
Chapter 7: Conclusion
Single buffered Processors
Multi buffered Processors
Findings
Laws Of Supply & Demand and Energy Sources