Abstract

The paper studies approximations and control of a processor sharing (PS) server whose service rate depends on the number of jobs occupying the server. The control of such a system is implemented by imposing a limit on the number of jobs that can share the server concurrently, with the remaining jobs waiting in a first-in-first-out (FIFO) buffer. A desirable control scheme should strike the right balance between efficiency (operating at a high service rate) and parallelism (preventing small jobs from getting stuck behind large ones). We use the framework of heavy-traffic diffusion analysis to devise near-optimal control heuristics for such a queueing system. However, whereas the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete PS server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers, which then yields a drift function. We establish diffusion approximations and use them to obtain insightful, closed-form approximations for the original system under a static concurrency limit control policy. We then extend our study to control policies that dynamically adjust the concurrency limit. We provide two novel numerical algorithms to solve the associated diffusion control problem. Our algorithms can be viewed as "average cost" iteration: the first uses binary search on the average cost, while the second, faster algorithm uses the Newton-Raphson method for root finding. Numerical experiments demonstrate the accuracy of our approximations for choosing optimal or near-optimal static and dynamic concurrency control heuristics.
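
The "average cost" iteration can be illustrated with a generic root-finding sketch. This is not the paper's algorithm: the function `hjb_residual` below is a hypothetical stand-in for solving the diffusion control problem at a candidate average cost and reporting a mismatch; only the two search strategies named in the abstract (binary search and Newton-Raphson root finding) are shown.

```python
# Sketch of "average cost" iteration: find the average cost gamma at which a
# residual (here a placeholder) crosses zero, by binary search and by Newton-Raphson.

def hjb_residual(gamma):
    # Placeholder: in the paper's setting this would come from solving the
    # associated diffusion control problem at candidate average cost `gamma`.
    return gamma ** 3 + 2.0 * gamma - 5.0


def bisection(f, lo, hi, tol=1e-10, max_iter=200):
    """Binary search for a root of f on [lo, hi]; assumes f(lo), f(hi) differ in sign."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or hi - lo < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid      # root lies in the upper half
        else:
            hi = mid                 # root lies in the lower half
    return 0.5 * (lo + hi)


def newton(f, x0, tol=1e-10, max_iter=100, h=1e-6):
    """Newton-Raphson with a finite-difference derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= fx / dfx
    return x


if __name__ == "__main__":
    print("binary search :", bisection(hjb_residual, 0.0, 5.0))
    print("Newton-Raphson:", newton(hjb_residual, 1.0))
```

Both routines converge to the same root; Newton-Raphson typically needs far fewer residual evaluations, which is why the abstract describes it as the faster of the two.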

Highlights

  • Consider an emergency room where doctors, nurses, and diagnostic equipment make up a shared resource for admitted patients

  • Whereas the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete processor sharing (PS) server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers that yields a drift function

  • The resource sharing system examples we described fall into the category of the so-called state-dependent limited processor sharing (Sd-LPS) systems

Summary

Introduction

Consider an emergency room where doctors, nurses, and diagnostic equipment make up a shared resource for admitted patients. Human operators tend to speed up service when there is congestion. As another example, consider a typical web server or an online transaction processing system. In such resource sharing systems, as the number of tasks (also called active threads) concurrently sharing the server increases, the server throughput initially increases because of more efficient utilization of resources. However, each additional concurrent task also consumes limited resources such as memory. Without a limit on the number of concurrent tasks, this contention for the limited memory can lead to a phenomenon called thrashing, which causes the system throughput to drop drastically (Denning et al. 1976, Blake 1982, Agrawal et al. 1985, Heiss and Wagner 1991, Welsh et al. 2001, Elnikety et al. 2004).
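
The effect of a static concurrency limit on such a system can be sketched numerically. The example below is illustrative only and rests on simplifying assumptions not taken from the paper: Poisson arrivals, exponential job sizes, and a hypothetical hump-shaped service-rate curve; the names `lam`, `mu`, and `mean_jobs` are placeholders. With a static limit K, at most K jobs share the server while the rest wait FIFO, so the total service rate in state n is mu(min(n, K)) and the queue length evolves as a birth-death chain.

```python
# Minimal sketch of the concurrency-limit trade-off under strong simplifying
# assumptions (Poisson arrivals, exponential job sizes, made-up rate curve).
import math

def mu(n):
    # Hypothetical state-dependent service rate: rises, peaks, then degrades
    # (thrashing) as more jobs share the server.
    return 1.0 if n == 0 else 2.0 * n * math.exp(-0.15 * n)

def mean_jobs(lam, K, n_max=500):
    """Mean number in system for the birth-death chain, truncated at n_max."""
    log_w = [0.0]                      # log of unnormalized stationary weights
    for n in range(1, n_max + 1):
        rate = mu(min(n, K))           # concurrency limit caps the shared rate
        log_w.append(log_w[-1] + math.log(lam / rate))
    m = max(log_w)
    w = [math.exp(x - m) for x in log_w]
    z = sum(w)
    return sum(n * wn for n, wn in enumerate(w)) / z

if __name__ == "__main__":
    lam = 2.5                          # arrival rate (jobs per unit time)
    for K in (2, 5, 10, 20, 40):
        print(f"K = {K:3d}   mean jobs in system: {mean_jobs(lam, K):8.2f}")
```

With these made-up numbers, a very small K keeps the server below its peak rate and the queue builds up, while a very large K pushes the service rate into the thrashing regime and the mean queue length explodes; intermediate limits keep the mean number of jobs small, which is exactly the efficiency-versus-parallelism balance the paper's control policies are designed to strike.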
