Abstract

The dynamic priority scheduling algorithm is one of the real-time scheduling algorithms used in power systems. However, when selecting the indicators that affect scheduling performance, it ignores the impact of the weight of each index. Since there is no explicit objective function relating the weight parameters to scheduling performance, it is difficult for heuristic algorithms to optimise the weight parameters. To solve this problem, a dynamic priority scheduling algorithm based on improved reinforcement learning (RL) is proposed for parameter optimisation. By learning the relationship between the weighting parameters and the deadline miss rate (DMR), global optimisation of the weighting parameters can be achieved; however, the learning efficiency of the conventional RL method is low. Guided by the task scheduling performance (the DMR) and the task characteristics, this study improves the RL action step and reward function, which accelerates online learning and strengthens the optimisation ability of the RL algorithm. Experimental results show that the improved RL algorithm not only optimises the weight parameters and reduces the DMR but also reduces the number of RL iterations. A scheduling algorithm optimised by RL can therefore be better applied to industrial control and power-system resource scheduling, improving control efficiency while reducing scheduling costs.
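To make the idea concrete, the following is a minimal Python sketch of an RL-style search that tunes the weights of a dynamic-priority function using the DMR as the reward signal, with an adaptive action step in the spirit of the improvement described above. All names, including the simulate_dmr stub and the specific step-adaptation rule, are illustrative assumptions rather than the authors' implementation.

```python
import random

def simulate_dmr(weights):
    """Stand-in for running the scheduler and measuring the DMR.
    A real evaluation would schedule a task set with priorities of the
    form p_i = w1*deadline_i + w2*laxity_i + ... and count the fraction
    of tasks that miss their deadlines."""
    # Toy DMR surface with a minimum near (0.6, 0.4), purely for demonstration.
    w1, w2 = weights
    return (w1 - 0.6) ** 2 + (w2 - 0.4) ** 2 + random.uniform(0.0, 0.01)

def optimise_weights(episodes=200, step=0.1):
    weights = [0.5, 0.5]                  # initial weighting parameters
    best_dmr = simulate_dmr(weights)
    for _ in range(episodes):
        # Action: perturb one randomly chosen weight by +/- step.
        i = random.randrange(len(weights))
        candidate = weights[:]
        candidate[i] = min(1.0, max(0.0, candidate[i] + random.choice([-step, step])))

        dmr = simulate_dmr(candidate)
        reward = best_dmr - dmr           # reward: reduction in the DMR
        if reward > 0:                    # accept actions that improve the DMR
            weights, best_dmr = candidate, dmr
            step = min(0.1, step * 1.1)   # grow the action step while improving
        else:
            step = max(0.01, step * 0.9)  # shrink the step near an optimum

    return weights, best_dmr

if __name__ == "__main__":
    w, dmr = optimise_weights()
    print(f"weights={w}, DMR={dmr:.4f}")
```

The adaptive step is what distinguishes this sketch from a fixed-step search: larger steps while the DMR is still falling speed up online learning, and smaller steps near an optimum refine the weights, which is one plausible reading of how an improved action step could reduce the number of RL iterations.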
