Abstract
It has been claimed on the basis of empirical studies that a CPU scheduling policy whereby I/O-bound jobs are given preemptive priority over CPU-bound jobs produces the highest overall CPU utilization in multiprogrammed computer systems. However, a theoretical result has shown that CPU utilization is independent of CPU scheduling in a finite-source queuing model of multiprogrammed systems. This paper aims to resolve this seeming conflict and to gain some insight into CPU scheduling by analyzing a Markovian model of job-stream processing. The model consists of an infinite backlog of jobs of two classes (a job stream) and a multiple-resource system (the model of a multiprogrammed system, which processes the job stream); the system is a cyclic queue of two nodes--a single (CPU) server and an infinite (I/O) server. The system holds a fixed number of jobs concurrently; the parameter values describing the two job classes are distinct except for the mean I/O service time. The following result is obtained under the near-complete decomposability of the model: the maximum overall CPU server utilization (and the maximum throughput) is achieved by a scheduling policy whereby jobs of the class having the shorter mean CPU service time are given preemptive priority over the others at the CPU server, although the CPU server utilization is independent of CPU scheduling so long as the set of jobs in the system remains fixed.
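To make the model structure concrete, here is a minimal simulation sketch of a two-class cyclic-queue system of the kind described above: a single exponential CPU server, an infinite-server (delay) I/O stage with a common mean for both classes, a fixed degree of multiprogramming, and a departing job immediately replaced from the infinite backlog. This is not the paper's analysis (which rests on near-complete decomposability rather than simulation); all numeric parameters, the departure/replacement mechanism, and the names `simulate`, `short_first`, and `long_first` are illustrative assumptions, not taken from the paper.

```python
import random

def simulate(policy, T=200_000.0, seed=1):
    """Gillespie-style simulation of a toy two-class cyclic-queue model.

    policy(a1, a2) -> class (1 or 2) to serve at the CPU, given the number
    of class-1 and class-2 jobs currently at the CPU station.
    Returns (cpu_utilization, throughput) over simulated time T.

    Because all service times are exponential, re-evaluating the policy and
    resampling clocks at every event is equivalent to preemptive-resume.
    """
    rng = random.Random(seed)

    # --- illustrative parameters (assumptions, not from the paper) ---
    s = {1: 0.5, 2: 2.0}   # mean CPU burst per class; class 1 is "I/O-bound"
    d = 10.0               # common mean I/O time (infinite-server delay stage)
    p_done = 0.2           # prob. a CPU burst is the job's last (job departs)
    q_new1 = 0.5           # prob. a replacement job from the backlog is class 1

    # 4 jobs held concurrently: (queue + service) at the CPU vs. in I/O
    a = {1: 1, 2: 1}       # at the CPU station
    b = {1: 1, 2: 1}       # at the I/O station
    t = busy_time = 0.0
    departures = 0

    while t < T:
        served = policy(a[1], a[2]) if (a[1] + a[2]) > 0 else None
        # competing exponential clocks: the CPU burst in service and each I/O
        rates = []
        if served is not None:
            rates.append(('cpu', served, 1.0 / s[served]))
        for c in (1, 2):
            if b[c] > 0:
                rates.append(('io', c, b[c] / d))
        total = sum(r for _, _, r in rates)
        dt = rng.expovariate(total)
        t += dt
        if served is not None:
            busy_time += dt
        # pick which clock fired, proportionally to its rate
        x = rng.random() * total
        for kind, c, r in rates:
            x -= r
            if x <= 0:
                break
        if kind == 'cpu':
            a[c] -= 1
            if rng.random() < p_done:
                departures += 1              # job completes and leaves
                newc = 1 if rng.random() < q_new1 else 2
                a[newc] += 1                 # backlog job replaces it at the CPU
            else:
                b[c] += 1                    # otherwise the job moves to I/O
        else:
            b[c] -= 1
            a[c] += 1

    return busy_time / t, departures / t


def short_first(a1, a2):
    # preemptive priority to class 1 (shorter mean CPU burst)
    return 1 if a1 > 0 else 2

def long_first(a1, a2):
    # the opposite priority ordering, for comparison
    return 2 if a2 > 0 else 1

for name, pol in [("short-CPU-burst first", short_first),
                  ("long-CPU-burst first", long_first)]:
    util, thr = simulate(pol)
    print(f"{name:22s}  CPU utilization {util:.3f}  throughput {thr:.4f}")
```

The comparison is only meant to illustrate the two quantities the abstract distinguishes (CPU utilization and throughput) under the two preemptive-priority orderings; because departing jobs are replaced from the backlog, the class mix in the system can drift with the scheduling policy, which is the mechanism through which scheduling can matter even though utilization is insensitive to scheduling while the job set stays fixed.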