Abstract

In Round Robin scheduling the time quantum is fixed, and processes are scheduled so that no process gets the CPU for more than one time quantum at a time. The performance of the Round Robin CPU scheduling algorithm depends entirely on the time quantum selected. If the time quantum is too large, the response time of the processes becomes too long, which may not be tolerable in an interactive environment. If the time quantum is too small, it causes unnecessarily frequent context switches, leading to more overhead and lower throughput. In this paper a method using the Manhattan distance is proposed to decide the quantum value. The time quantum is computed from the distance, that is, the difference between the highest burst time and the lowest burst time. The experimental analysis shows that this algorithm performs better than the standard RR algorithm by reducing the number of context switches, the average waiting time, and the average turnaround time.
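
As a rough illustration of the idea described above, the sketch below simulates Round Robin with a quantum derived from the gap between the largest and smallest burst times. The quantum rule quantum = |max(bursts) - min(bursts)| is an assumption based only on the abstract's wording; the paper's actual formula, arrival-time handling, and context-switch accounting may differ.

```python
# Minimal sketch, assuming: all processes arrive at time 0 and the quantum
# is the 1-D Manhattan distance between the highest and lowest burst time.
# This is NOT the paper's exact algorithm, only an illustration of the idea.
from collections import deque

def simulate_rr(bursts, quantum):
    """Simulate Round Robin; return (avg waiting, avg turnaround, switches)."""
    n = len(bursts)
    remaining = list(bursts)
    completion = [0] * n
    ready = deque(range(n))
    time = 0
    context_switches = 0

    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        time += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append(pid)          # preempted, back of the queue
            context_switches += 1      # simplified: count each preemption
        else:
            completion[pid] = time     # process finished

    turnaround = completion                          # arrival time is 0
    waiting = [t - b for t, b in zip(turnaround, bursts)]
    return sum(waiting) / n, sum(turnaround) / n, context_switches

bursts = [24, 3, 3, 30]                              # example burst times
quantum = abs(max(bursts) - min(bursts))             # assumed quantum rule
print("distance-based quantum:", simulate_rr(bursts, quantum))
print("fixed quantum of 4:    ", simulate_rr(bursts, 4))
```

Running both calls side by side shows how a larger, burst-aware quantum cuts the number of preemptions compared with a small fixed quantum, at the cost of longer per-slice response time, which is the trade-off the abstract discusses.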
