Abstract

This article studies the dynamic resource allocation problem (DRAP) with unknown cost functions and unknown resource transition functions. The goal of the agents is to minimize the sum of cost functions over given time periods in a distributed way, that is, by exchanging information only with their neighboring agents. First, we propose a distributed Q-learning algorithm for the DRAP with unknown cost functions and unknown resource transition functions under discrete local feasibility constraints (DLFCs). It is theoretically proved that the joint policy of agents produced by the distributed Q-learning algorithm always yields a feasible allocation (FA), that is, one satisfying the constraints at each time period. Then, we study the DRAP with unknown cost functions and unknown resource transition functions under continuous local feasibility constraints (CLFCs), for which a novel distributed Q-learning algorithm is proposed based on function approximation and distributed optimization. Notably, the update rule of each agent's local policy also ensures that the joint policy of agents is an FA at each time period. This property is of vital importance for executing the ε-greedy policy throughout the training process. Finally, simulations are presented to demonstrate the effectiveness of the proposed algorithms.
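To make the feasibility-preserving ε-greedy idea concrete, below is a minimal Python sketch of a single agent's tabular Q-learning step in which both exploration and exploitation are restricted to a locally feasible action set, so every executed allocation remains feasible. All names (e.g., FeasibleQAgent) are illustrative assumptions, not from the paper, and the neighbor-to-neighbor information exchange and the function-approximation variant for CLFCs are deliberately omitted; this is a sketch of the general mechanism, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

class FeasibleQAgent:
    """Illustrative single-agent sketch: tabular Q-learning whose
    epsilon-greedy exploration is restricted to locally feasible actions.
    (The paper's distributed information exchange is omitted here.)"""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, eps=0.1):
        self.Q = np.zeros((n_states, n_actions))  # Q-table of estimated costs
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, feasible_actions):
        # Explore and exploit only over actions satisfying the local
        # constraint, so every executed allocation stays feasible.
        if rng.random() < self.eps:
            return int(rng.choice(feasible_actions))
        q_vals = self.Q[state, feasible_actions]
        return int(feasible_actions[np.argmin(q_vals)])  # min-cost action

    def update(self, s, a, cost, s_next, feasible_next):
        # Cost-minimization form of the Q-learning backup: bootstrap with
        # the minimum Q-value over the next state's feasible actions.
        target = cost + self.gamma * self.Q[s_next, feasible_next].min()
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])

# Toy usage: 4 resource states, 3 allocation actions, of which only a
# subset is feasible in each state.
agent = FeasibleQAgent(n_states=4, n_actions=3)
a = agent.act(state=0, feasible_actions=[0, 2])
agent.update(s=0, a=a, cost=1.0, s_next=1, feasible_next=[1, 2])
```

Because the feasible action set is enforced inside act() itself, the ε-greedy policy never proposes an infeasible allocation during training, which mirrors the property the abstract highlights as essential for running ε-greedy throughout the whole training process.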
