Abstract

A stochastic learning automaton model based on relative reward strength is proposed for solving the job scheduling problem in distributed computing systems. The scheduling approach belongs to the category of distributed algorithms: an automaton scheduler at each local host in the computer network decides whether to accept an incoming job or transfer it to another server. The proposed learning scheme uses the most recent reward the environment provides for each action, which gives the automaton the capability to handle uncertainties such as workload variation or incomplete system state information. Simulation results demonstrate that the performance of the proposed scheduling approach does not degrade when the workload changes, and that it outperforms the Fixed Scheduling Discipline and Joining the Shortest Queue approaches under incomplete system information.
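The abstract does not give the exact update rule, but the core idea of a relative-reward-strength automaton can be sketched as follows: each action's selection probability is kept proportional to the most recent reward observed for that action. The two-action setup (accept locally vs. transfer), the reward values, and the class name are illustrative assumptions, not the paper's implementation.

```python
import random

class RewardStrengthAutomaton:
    """Illustrative learning automaton: action probabilities are kept
    proportional to the most recent reward observed for each action
    ("relative reward strength"). A sketch of the general idea only,
    not the paper's exact scheme."""

    def __init__(self, n_actions, initial_reward=1.0):
        # Start with equal (optimistic) rewards so every action is tried.
        self.recent_reward = [initial_reward] * n_actions

    def probabilities(self):
        total = sum(self.recent_reward)
        return [r / total for r in self.recent_reward]

    def choose(self):
        # Sample an action with probability proportional to its
        # most recent reward (its relative reward strength).
        return random.choices(range(len(self.recent_reward)),
                              weights=self.recent_reward, k=1)[0]

    def update(self, action, reward):
        # Only the latest reward is kept, so the automaton adapts
        # quickly when the environment (e.g. the workload) changes.
        self.recent_reward[action] = max(reward, 1e-9)

# Hypothetical scheduler: action 0 = accept job locally,
# action 1 = transfer it to another host.
random.seed(0)
automaton = RewardStrengthAutomaton(n_actions=2)
for _ in range(1000):
    a = automaton.choose()
    # Assumed environment: transferring currently yields higher reward.
    reward = 0.3 if a == 0 else 0.9
    automaton.update(a, reward)

p = automaton.probabilities()  # converges toward favoring transfer
```

Because the automaton keeps only the latest reward per action rather than a long-run average, a shift in workload immediately changes the relative strengths, which is the adaptivity property the abstract highlights.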
