Round Robin is one of the most widely used process scheduling algorithms in CPU scheduling because it is simple and fair. However, its efficiency depends heavily on the choice of time quantum. If the quantum is too small, the CPU performs frequent context switches; the overhead grows and the average waiting time of processes increases, so system performance degrades as more CPU time is spent on context switching instead of task execution. If the quantum is too long, Round Robin behaves similarly to the First Come First Serve (FCFS) algorithm, which also leads to long average waiting times. This paper proposes an improved Round Robin algorithm that incorporates machine learning to determine the time quantum dynamically. More precisely, the K-Nearest Neighbors algorithm is used, with NumPy handling the data processing, to predict at runtime an optimal time quantum based on process characteristics. The experimental results show a considerable improvement in average waiting time, turnaround time, and the number of context switches compared with the traditional Round Robin algorithm. The results indicate that machine learning can effectively turn a predictable scheduling algorithm into an adaptive and efficient one for operating systems.

Key words: dynamic round robin, classical round robin, burst time, machine learning.
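The abstract does not include implementation details, so the following is only a minimal sketch of the general idea, not the authors' method: a NumPy-based K-Nearest Neighbors regressor predicts a time quantum from simple workload features (here, mean and standard deviation of burst times, an assumed feature choice), and a Round Robin simulation then runs with that quantum. The training set, the value of k, and the feature selection are all illustrative assumptions.

```python
import numpy as np

def knn_predict_quantum(train_features, train_quanta, query, k=3):
    """Predict a time quantum as the mean of the k nearest training samples."""
    distances = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return float(np.mean(train_quanta[nearest]))

def round_robin(burst_times, quantum):
    """Simulate Round Robin; return average waiting time and context switches."""
    n = len(burst_times)
    remaining = list(burst_times)
    waiting = [0] * n
    switches = 0
    queue = list(range(n))
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        # every process still in the ready queue waits while process i runs
        for j in queue:
            waiting[j] += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)
        if queue:
            switches += 1
    return sum(waiting) / n, switches

# Hypothetical training data: features are (mean burst, std of bursts),
# targets are quanta assumed to have worked well for similar workloads.
X_train = np.array([[5.0, 1.0], [20.0, 5.0], [50.0, 10.0]])
y_train = np.array([4.0, 12.0, 25.0])

bursts = [24, 3, 3, 7, 10]
query = np.array([np.mean(bursts), np.std(bursts)])
quantum = knn_predict_quantum(X_train, y_train, query, k=2)
avg_wait, ctx = round_robin(bursts, quantum)
print(f"predicted quantum={quantum:.1f}, avg waiting={avg_wait:.1f}, context switches={ctx}")
```

Under these assumptions, the predicted quantum adapts to the workload: a batch of long bursts maps to a larger quantum (fewer context switches), while short bursts map to a smaller one, which is the trade-off the abstract describes.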